Category Archives: Solaris

Migration of a Single PDB to a Different Endian Platform – Is it Possible?

Since I do lots of migrations in the course of my job, it’s always good to explore some of the newer features so that you are familiar with them and feel good about putting them into your tool bag for future use. One of the things I could see being useful is the ability to migrate a single PDB from one platform to another. Upon researching the possibilities, I found that migrating a PDB while also converting its endianness really wasn’t documented. What was documented was a process to migrate a PDB from one platform to another when the endianness is the same on both.

Given that, I wanted to see whether the process for converting an entire container using transportable tablespaces could be applied to a single PDB. Unfortunately, downtime is required, but here is what I found:

Create Backup of Source PDB

Determine which tablespaces are impacted for the user in question, check for any violations that would prevent the transport, set those tablespaces to read-only, and then connect to the PDB with RMAN and take the backup:

SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
          2 Solaris[tm] OE (64-bit)                  Big                     0

SQL> exec SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'HR_TEST', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SQL> SELECT * FROM transport_set_violations;

no rows selected
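
If you are not sure which tablespaces a schema actually occupies, a quick query against dba_segments from inside the source PDB will tell you — a minimal sketch, assuming the HR schema used in this example:

-- List every tablespace holding segments owned by HR
SELECT DISTINCT tablespace_name
FROM   dba_segments
WHERE  owner = 'HR';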


export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RMAN> alter tablespace hr_test read only;

using target database control file instead of recovery catalog
Statement processed

RMAN> alter tablespace users read only;

Statement processed

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
BACKUP TO PLATFORM 'Linux x86 64-bit' AS COMPRESSED BACKUPSET FILESPERSET 1
   FORMAT '/export/home/oracle/backups/%d_xplat_bs%s_piece%p_%T_%U.bkp'
   DATAPUMP FORMAT '/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp'
   TABLESPACE HR_TEST, USERS;
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=752 instance=xttsun1 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=745 instance=xttsun1 device type=DISK

Starting backup at 29-OCT-20
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_XTTSUN_aahC":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   EXPDP> Master table "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_XTTSUN_aahC is:
   EXPDP>   /u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace HR_TEST:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
   EXPDP> Datafiles required for transportable tablespace USERS:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
   EXPDP> Job "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully completed at Thu Oct 29 13:14:42 2020 elapsed 0 00:02:17
Export completed

channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) in backup set
input datafile file number=00018 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) in backup set
input datafile file number=00017 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
channel XPLATFORM_BKUP_DISK2: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK2: backup set complete, elapsed time: 00:00:03
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:35
channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:01
Finished backup at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

Copy Backup Files and Metadata File to New Platform

Using normal methods such as scp, copy the files generated by RMAN to the new platform. One caveat: until 19c, the Data Pump metadata file is restored into $ORACLE_HOME/dbs, so you must make sure there is sufficient space there for the process to complete successfully.
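
A minimal sketch of the copy, assuming the backup directory from this example and a hypothetical target host and staging path (the restore log later in this post uses /archive/xttlnux):

# Copy the two backup pieces plus the Data Pump metadata backup piece
scp /export/home/oracle/backups/XTTSUN_xplat_bs*_piece1_20201029_*.bkp \
    /export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp \
    oracle@xttlnux1:/archive/xttlnux/

# Pre-19c, RMAN restores the metadata dump into $ORACLE_HOME/dbs, so check space there
df -h /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs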

Restore / Convert Endian on New Platform

The last step in the process is to set up the new PDB, create the user in the PDB, and then use RMAN to transport the tablespaces. The general process is simply the following steps:

  • Create an empty pluggable database and create the target user in the new PDB:
SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
         13 Linux x86 64-bit                         Little                  0

SQL> CREATE PLUGGABLE DATABASE xttpdb ADMIN USER pdb_admin IDENTIFIED BY Welcome123 CREATE_FILE_DEST='+DATA';

Pluggable database created.
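
-- Not shown in the log: a newly created PDB starts out in MOUNTED mode, so open it
-- read/write before connecting to it to create the user (PDB name from this example):
ALTER PLUGGABLE DATABASE xttpdb OPEN;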

SQL> CREATE USER "HR" IDENTIFIED BY VALUES 'S:904B950378D93630E729A1A3051796718B7511BA10F5273F2F031E5CF76D;T:22D35135BC49EEAF5F4C896F26E2E8802D869C6451F39241BB4ACB76A38B7396E1C516DC78F0AFB1D3A96ABB761543673ABCF540CAC36927564F892FC148B436934AD38AD6D35C1EB8F3BB55F70BB02A'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
PASSWORD EXPIRE
ACCOUNT LOCK;

SQL> GRANT "RESOURCE" TO "HR";

Grant succeeded.
SQL> GRANT CREATE SESSION TO "HR";

Grant succeeded.
SQL> GRANT ALTER SESSION TO "HR";

Grant succeeded.
SQL> GRANT UNLIMITED TABLESPACE TO "HR";

Grant succeeded.
SQL> GRANT CREATE SYNONYM TO "HR";

Grant succeeded.
SQL> GRANT CREATE VIEW TO "HR";

Grant succeeded.
SQL> GRANT CREATE SEQUENCE TO "HR";

Grant succeeded.
SQL> GRANT CREATE DATABASE LINK TO "HR";

Grant succeeded.
SQL> GRANT EXECUTE ON "SYS"."DBMS_STATS" TO "HR";

Grant succeeded.
SQL> ALTER USER "HR" DEFAULT ROLE ALL;

User altered.

  • Invoke RMAN and connect to the target PDB:

export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
RESTORE
   ALL FOREIGN DATAFILES TO NEW
   FROM BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp'
   BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp'
   DUMP FILE FROM BACKUPSET '/archive/xttlnux/transport_tblsp_hr_test_meta.dmp';
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

using target database control file instead of recovery catalog
allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=512 instance=xttlnux2 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=615 instance=xttlnux1 device type=DISK

Starting restore at 29-OCT-20

channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK2: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK2: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restoring foreign file 17 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.929.1055079821
channel XPLATFORM_BKUP_DISK2: restoring foreign file 18 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.928.1055079877
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:02
channel XPLATFORM_BKUP_DISK2: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: restored backup piece 1
channel XPLATFORM_BKUP_DISK2: restore complete, elapsed time: 00:00:26
channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring Data Pump dump file to /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/backup_tts_XTTLNUX_32652.dmp
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:00

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_XTTLNUX_CFfw" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_XTTLNUX_CFfw":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings

ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings

   IMPDP> Job "SYS"."TSPITR_IMP_XTTLNUX_CFfw" completed with 2 error(s) at Thu Oct 29 13:44:46 2020 elapsed 0 00:00:32
Import completed

Finished restore at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

At this point, the tablespaces have been converted and plugged into the target PDB, and after a couple of finishing touches the objects are ready for use!
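
Those finishing touches, sketched below under the assumption that you want the tablespaces writable: transported tablespaces come across read-only, and the two triggers flagged with ORA-39082 can be recompiled by hand rather than left to recompile on first use.

ALTER TABLESPACE hr_test READ WRITE;
ALTER TABLESPACE users READ WRITE;

ALTER TRIGGER hr.secure_employees COMPILE;
ALTER TRIGGER hr.update_job_history COMPILE;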

Interesting Items of Note

Interestingly, after performing the above conversion and migration without any issues, I started to poke around, as I normally do, to see what “really” happened. When I did, I found something quite intriguing. Looking at “v$datafile”, I saw that while RMAN did assign OMF filenames at restore time, it placed those files in a totally different directory than all of the other files in the PDB.

SQL> select file#, name from v$datafile;

     FILE# NAME
---------- ----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.919.1055165973
	34 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.922.1055166029

As you may have noticed, there are two GUIDs present in ASM for the same PDB: “B2A8DA5A8E2A777EE0540010E05D13EA” and “B2E8839B69346990E05302EC090A3057”. Looking at the source system, it appears that despite specifying “TO NEW” in my RMAN RUN block, the restore still carried over the GUID-named directory from the old system. There is nothing wrong with this, other than the files not sharing a directory with the rest of the PDB, which could become an administration headache. If you would like to fix it, the normal methods apply, one of which is “alter database move datafile ….”. The results can be seen below:

SQL> alter database move datafile 33;

Database altered.

Elapsed: 00:00:08.40	
SQL> alter database move datafile 34;

Database altered.

Elapsed: 00:00:00.15
SQL> select file#, name from v$datafile;

     FILE# NAME
---------- -----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/hr_test.957.1055166369
	34 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/users.919.1055166403

6 rows selected.

As you can see, all of the files now reside within ASM under the same GUID directory and are still OMF. Hopefully this will help you decide whether this is an appropriate migration method to get you to a PDB on a different platform!

Ephemeral Ports and Oracle RAC Interconnect

Building upon my previous post, “Ulimit nofiles Unlimited. Why not?”, ensuring that there are enough ephemeral ports for interconnect traffic is important. As a matter of fact, it’s part of the prerequisite checks for a RAC installation. But what are they, and why are they important? For the purposes of this post I won’t deal with excessive interconnect traffic; instead I will deal with latency caused by a lack of ephemeral ports.

Simply explained, ephemeral ports are the short-lived transport-protocol ports used for Internet Protocol (IP) communications. Oracle RAC interconnect traffic specifically uses the User Datagram Protocol (UDP): the client end of a client-server conversation is assigned a port from the ephemeral range when talking to a well-known port (Oracle suggests a specific port range) on the server. What that means is that when a client initiates a request, it chooses a random port from the ephemeral range and expects the response on that port only. Since interconnect traffic is usually short-lived and frequent, there need to be enough ports available to ensure timely communication. If there are not enough ports available for the needed interconnect traffic, interconnect latency can occur.
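
If you want to see those short-lived client ports in action, netstat can list the UDP endpoints an instance currently holds open — a quick sketch using Solaris syntax (-P selects the protocol):

netstat -an -P udp | head -20

Any local ports falling inside the configured anonymous range are the ephemeral, client-side ends of these conversations.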

In this case, while performing a health check on a system, we immediately noticed an issue with the interconnect manifesting itself in AWR as long ping times between nodes, as seen in dba_hist_interconnect_pings:

[Image: ping-stats-4hr-before — interconnect ping statistics over a 4-hour window, before the fix]

In my experience, on a system with a well-tuned interconnect and not a ton of cross-instance traffic, these should be in the 5 ms or less range.
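
If you want to pull these numbers yourself rather than reading them off an AWR report, a rough sketch against dba_hist_interconnect_pings follows. The *_500B and *_8K columns track 500-byte and 8 KB pings; dividing total wait by count gives an average per ping. Note this is a sketch: depending on version, these counters may be cumulative, in which case you would diff consecutive snapshots first.

SELECT s.begin_interval_time,
       p.instance_number                              AS from_inst,
       p.target_instance                              AS to_inst,
       ROUND(p.wait_500b / NULLIF(p.cnt_500b, 0), 2)  AS avg_wait_500b,
       ROUND(p.wait_8k   / NULLIF(p.cnt_8k, 0), 2)    AS avg_wait_8k
FROM   dba_hist_interconnect_pings p
       JOIN dba_hist_snapshot s
         ON  s.snap_id         = p.snap_id
        AND  s.dbid            = p.dbid
        AND  s.instance_number = p.instance_number
ORDER  BY s.begin_interval_time, from_inst, to_inst;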

So we started to check all of the usual suspects, such as NIC and DNS issues. One thing we did find was that the private interconnect addresses were missing from /etc/hosts, which could contribute some of the latency, right? While adding those helped smooth out the spikes, it didn’t address the entire issue. So, as part of my problem-solving process, I next started poking around with the “netstat” command to see if anything turned up, and something most certainly did:

[Image: udpNoPorts-Example-pre — netstat output showing a high udpNoPorts count]

udpNoPorts… What’s that, and why is it so high? That led me to check the prerequisites and, ultimately, the ephemeral port ranges. Sure enough, they were set incorrectly, so the system was waiting on available ports to send interconnect traffic, causing additional latency.
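
A quick way to check the live values — note that with ipadm on Solaris 11, the property names drop the tcp_/udp_ prefix and the protocol is given as the last argument:

ipadm show-prop -p smallest_anon_port,largest_anon_port tcp
ipadm show-prop -p smallest_anon_port,largest_anon_port udp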

Oracle specifies the following settings for Solaris 11 and above.  They are set using “ipadm set-prop”:

"tcp_smallest_anon_port"  = 9000
"udp_smallest_anon_port" = 9000
"tcp_largest_anon_port"  = 65500
"udp_largest_anon_port" = 65500

In this case they were set to:

"tcp_smallest_anon_port" = 32768
"udp_smallest_anon_port" = 32768
"tcp_largest_anon_port" = 60000
"udp_largest_anon_port" = 60000

If you were using Linux, the configuration parameter in “sysctl.conf” would be:

 
net.ipv4.ip_local_port_range = 9000 65500 
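
Putting it together, a sketch of applying the fix on each platform (run as root; on Linux, persist the change in /etc/sysctl.conf as well):

# Solaris 11 and above
ipadm set-prop -p smallest_anon_port=9000 tcp
ipadm set-prop -p largest_anon_port=65500 tcp
ipadm set-prop -p smallest_anon_port=9000 udp
ipadm set-prop -p largest_anon_port=65500 udp

# Linux
sysctl -w net.ipv4.ip_local_port_range="9000 65500"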

After correcting the settings, we were able to cut the latency in half:

[Image: ping-stats-4hr-after — interconnect ping statistics over a 4-hour window, after the fix]

[Image: interconnect latency after the fix]

Now, we still have some work to do, because the interconnect latency is still higher than desired, but at this point we are likely dealing with the way the application connects to the database without being RAC-aware rather than an actual hardware or network issue.

Incremental improvement is always welcomed!

Updating Solaris DNS via SVCCFG

It’s been a while since I have made a blog post; time has certainly been very precious. Believe me, I have no shortage of topics to cover, either.

Recently, while doing some of my routine checks, I discovered that one of the SuperClusters I support had the DNS for all Global and Local Domains pointing to a DNS server over 2,000 miles away. How that happened isn’t entirely important, and I know we can all agree it isn’t a great idea, especially when there is a DNS server in the local data center.

So let’s look at the best way to fix this. Solaris 11 makes it pretty easy using the “svccfg” command, right?

svccfg -s network/dns/client listprop config
svccfg -s network/dns/client setprop config/nameserver = net_address: "([primary dns] [backup dns])"
svcadm refresh dns/client
svccfg -s network/dns/client listprop config
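
For example, with hypothetical name servers 10.1.1.10 and 10.1.1.11 (substitute your own), the setprop line would look like this:

svccfg -s network/dns/client setprop config/nameserver = net_address: "(10.1.1.10 10.1.1.11)"
svcadm refresh dns/client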

Well, there you have it. Is it all set now and forever? Actually, no. Upon further investigation, I found that the “/etc/resolv.conf” file was not updated with the new changes. Why was that? Further research revealed that there was actually one more command to execute, and it wasn’t located anywhere near the first set of commands in the documentation:

nscfg import svc:/network/dns/client:default

After adding that simple command to the set executed earlier, the “/etc/resolv.conf” file reflected the new values.
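
To verify, you can compare the SMF property against the regenerated file — a quick check:

svcprop -p config/nameserver network/dns/client
cat /etc/resolv.conf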

Hope this helps.