Category Archives: Database Migration

Tuning the PostgreSQL “work_mem” Parameter

In my previous post, “Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!”, I introduced three PostgreSQL parameters that I feel are the most “neglected” and that present the most opportunity for performance tuning in a PostgreSQL instance. In this first post of what will become a three-part series, I would like to demonstrate tuning the “work_mem” parameter.

The “work_mem” parameter controls how much memory each of the following database operations may use before it spills to disk or falls back to a less precise strategy:

  • sorts
  • bitmap heap scans
  • hash joins
  • materialized common table expressions (WITH statements)
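
Keep in mind that “work_mem” is a per-operation limit rather than a per-connection or per-instance one, so a single complex query can consume several multiples of it. Before tuning anything, it is worth checking where you currently stand; a quick sketch:

SHOW work_mem;
SELECT name, setting, unit, context FROM pg_settings WHERE name = 'work_mem';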

To get started, let’s create a test table with 100 million rows of data:

CREATE TABLE public.work_mem_test (
		id int PRIMARY KEY, 
		value numeric,
		product_id int,
		effective_date timestamp(3)
		);
INSERT INTO public.work_mem_test VALUES ( 
		generate_series(0,100000000), 
		random()*1000,
		random()*100,
		current_timestamp(3));

CREATE INDEX prod_value_idx ON public.work_mem_test (product_id);
VACUUM ANALYZE public.work_mem_test;

We will then run EXPLAIN ANALYZE with the “COSTS, BUFFERS, VERBOSE” options so that we can see exactly what is going on with the query. For demonstration purposes, I have set “work_mem” to the lowest possible setting of 64kB. In addition, I have set “max_parallel_workers_per_gather” to zero to disable parallel query, so that we don’t get the variability of parallel processing. Many systems may see even better gains than this test case, as this was a very small 2 vCPU / 8GB Google Cloud SQL PostgreSQL instance:

set session work_mem to '64kB';
set max_parallel_workers_per_gather = 0;
set effective_io_concurrency = 20;
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2640424.97..2641959.63 rows=613866 width=27) (actual time=16593.228..16778.132 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24248kB
   Buffers: shared hit=6 read=363702, temp read=15340 written=16486
   I/O Timings: read=4120.104
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..2539439.23 rows=613866 width=27) (actual time=82.454..15413.822 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
         Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
         Rows Removed by Index Recheck: 48506004
         Heap Blocks: exact=2058 lossy=360975
         Buffers: shared hit=6 read=363702
         I/O Timings: read=4120.104
         ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=80.340..80.342 rows=0 loops=1)
               Buffers: shared hit=6 read=669
               I/O Timings: read=17.683
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=12.680..12.680 rows=90001 loops=1)
                     Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                     Buffers: shared hit=3 read=247
                     I/O Timings: read=7.831
               ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=67.657..67.657 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=3 read=422
                     I/O Timings: read=9.852
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=39 read=2
   I/O Timings: read=2.588
 Planning Time: 3.136 ms
 Execution Time: 16811.970 ms
(30 rows)

Time: 16903.784 ms (00:16.904)

EXPLAIN, via the ANALYZE and BUFFERS options, gives us the following data points:

Rows Removed by Index Recheck: 48506004
Heap Blocks: exact=2058 lossy=360975


Execution Time: 16811.970 ms

This essentially means that 64kB of “work_mem” can only hold exact pointers for 2058 heap blocks in the bitmap structure. Everything that falls outside of that budget is tracked as lossy blocks, meaning they don’t point to an exact tuple but rather to a block containing many tuples. The recheck condition then has to re-examine each of those blocks for the tuples the query is actually looking for.

The following formula is a starting point; it may or may not give you the exact setting needed, depending on various factors. Since we started at the lowest possible “work_mem”, the new setting is computed as a multiple of it:

new_mem_in_mbytes = 
 ((exact heap blocks + lossy heap blocks) / exact heap blocks) * work_mem_in_bytes / 1048576
= ceil(round(((2058 + 360975) / 2058) * 65536 / 1048576,1))
= 11MB
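
If you prefer to let the database do the arithmetic, the same calculation can be run directly in psql; the 2058 and 360975 block counts come from the plan above and 65536 is the current 64kB “work_mem” expressed in bytes:

SELECT ceil(round(((2058 + 360975)::numeric / 2058) * 65536 / 1048576, 1)) AS suggested_work_mem_mb;
-- returns 11, matching the 11MB used in the next test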

Note: In most cases, I have found that this formula works well on the first pass. However, as you will see in the subsequent tests, this estimated “work_mem” setting wasn’t quite enough, which is likely due to a mis-estimate by the planner.

Reducing the Lossy Block Access

For the next test, I will increase “work_mem” to 11MB and re-execute the query.

set session work_mem to '11MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2160340.77..2161875.43 rows=613866 width=27) (actual time=11382.002..11574.572 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24232kB
   Buffers: shared hit=23329 read=340379, temp read=3029 written=3034
   I/O Timings: read=3618.302
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..2090832.53 rows=613866 width=27) (actual time=185.251..10923.764 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
          Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
          Heap Blocks: exact=164090 lossy=198943
          ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=132.954..132.957 rows=0 loops=1)
                Buffers: shared hit=675
                ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=4.449..4.450 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=128.503..128.503 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=425
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.179 ms
 Execution Time: 11611.071 ms
(26 rows)

Time: 11695.952 ms (00:11.696)

With 11MB we get many more exact heap blocks, but still not enough memory to avoid lossy blocks entirely. Applying the formula to the new execution plan:

new_mem_in_mbytes = 
 ((exact heap blocks + lossy heap blocks) / exact heap blocks) * work_mem_in_bytes / 1048576
= ceil(round(((164090 + 198943) / 164090) * 12582912 / 1048576,1));
= 24MB


Let’s increase “work_mem” a little more, to the 24MB that the latest iteration of the formula suggests.

set session work_mem to '24MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=1709385.33..1710919.99 rows=613866 width=27) (actual time=3651.589..3791.250 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24232kB
   Buffers: shared hit=23329 read=340379, temp read=3029 written=3031
   I/O Timings: read=1493.162
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..1639877.09 rows=613866 width=27) (actual time=348.261..3201.421 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
          Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
          Heap Blocks: exact=363033
          ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=188.309..188.311 rows=0 loops=1)
                Buffers: shared hit=675
                ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=5.090..5.091 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=183.215..183.215 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=425
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.242 ms
 Execution Time: 3828.215 ms
(25 rows)

Time: 3912.670 ms (00:03.913)

No more lossy block scans! Execution time has also dropped significantly from the first run.

Handling the Sort Method

Now we need to pay attention to these lines of the explain plan:

 "Bitmap Heap Scan on public.work_mem_test….rows=613866 width=27" 
 "Sort Method: external merge Disk: 24232kB."

Part of the sort fits in memory and the rest spills to disk. To fit the entire row set in memory, we need roughly the estimated rows multiplied by the row width (613866 × 27 bytes), which is about 16MB. In addition, the planner spilled another 24MB to disk, so let’s add that to “work_mem” as well.

16MB + the 24MB being spilled to disk = 40MB more “work_mem”

So with the current “work_mem” of 24MB plus the additional 40MB computed to keep the sort in memory (rounded up), the total needed is 64MB.
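
The same back-of-the-envelope math can be checked in psql (613866 rows and the 27-byte width are the planner estimates from the plan, and 24232kB is the reported disk spill):

SELECT pg_size_pretty(613866::bigint * 27 + 24232 * 1024::bigint) AS additional_work_mem;
-- roughly 40MB on top of the current 24MB, which is how we arrive at 64MB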

Let’s run one more test:

set session work_mem to '64MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=613087.41..614622.07 rows=613866 width=27) (actual time=3186.918..3309.896 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: quicksort  Memory: 61169kB
   Buffers: shared hit=6 read=363702
   I/O Timings: read=1306.892
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..554071.67 rows=613866 width=27) (actual time=245.348..2908.344 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
          Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
          Heap Blocks: exact=363033
          ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=115.051..115.053 rows=0 loops=1)
                Buffers: shared hit=6 read=669
                I/O Timings: read=3.561
                ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=6.160..6.161 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=108.889..108.889 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=3 read=422
                     I/O Timings: read=2.231
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.180 ms
 Execution Time: 3347.271 ms
(28 rows)

Time: 3431.188 ms (00:03.431)

With that adjustment, we have significantly increased the efficiency and performance of the query. Just by tuning “work_mem”, we have shaved approximately 13.5 seconds off the original processing time!

What about a top-N Heapsort?

Now, if we want to demonstrate a top-N heapsort, we can change the query slightly by adding a LIMIT:

set session work_mem to '64MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST LIMIT 10;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=567337.09..567337.12 rows=10 width=27) (actual time=3021.185..3021.190 rows=10 loops=1)
   Output: id, value, product_id, effective_date
   Buffers: shared hit=6 read=363702
   I/O Timings: read=1313.044
   ->  Sort  (cost=567337.09..568871.76 rows=613866 width=27) (actual time=3021.183..3021.186 rows=10 loops=1)
         Output: id, value, product_id, effective_date
         Sort Key: work_mem_test.value NULLS FIRST
         Sort Method: top-N heapsort  Memory: 26kB
         Buffers: shared hit=6 read=363702
         I/O Timings: read=1313.044
         ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..554071.67 rows=613866 width=27) (actual time=235.909..2911.978 rows=589379 loops=1)
               Output: id, value, product_id, effective_date
                Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
                Heap Blocks: exact=363033
                ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=108.429..108.431 rows=0 loops=1)
                      Buffers: shared hit=6 read=669
                      I/O Timings: read=3.114
                      ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=5.582..5.582 rows=90001 loops=1)
                            Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                      ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=102.845..102.845 rows=499851 loops=1)
                           Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                           Buffers: shared hit=3 read=422
                           I/O Timings: read=2.037
 Query Identifier: 4969544646514690020
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.177 ms
 Execution Time: 3023.304 ms
(32 rows)

Time: 3107.421 ms (00:03.107)

Because we are only returning the top N rows, a different sort method (a top-N heapsort) can be used, so the memory required is much lower. In addition, the execution time is reduced even further.

As you can see, with a little tuning of the “work_mem” parameter, a lot of performance can be gained. In this example, we increased “work_mem” by a fairly small amount, from 64kB to 64MB. In my mind, you never want to raise “work_mem” to a point where, if every backend were using its full allocation at once, you could exhaust the free memory on the system. Also, remember that there is some overhead to allocating and managing that memory, so it’s really important to find a good balance for your workload. Keep in mind that you can set this parameter at the instance level, per session or transaction with SET / SET LOCAL, or per role or database with ALTER ROLE / ALTER DATABASE ... SET.
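
As a quick illustration of those scopes, here is a sketch; the role and database names are made up for the example:

-- per transaction (reverts automatically at COMMIT or ROLLBACK)
BEGIN;
SET LOCAL work_mem = '64MB';
-- ... run the memory-hungry report query here ...
COMMIT;

-- per role or per database
ALTER ROLE reporting_user SET work_mem = '64MB';
ALTER DATABASE reporting_db SET work_mem = '32MB';

-- instance-wide default
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();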

Enjoy!

Migration of a Single PDB to a Different Endian Platform – Is it Possible?

Seeing that I do lots of migrations in the course of my job, it’s always good to explore some of the newer features so that you are familiar with them and feel comfortable putting them in your tool-bag for future use. One thing I could see being useful is the ability to migrate a single PDB from one platform to another. Upon researching the possibilities, I found that migrating and converting the endianness of a single PDB really wasn’t documented. What was documented was a process to migrate a PDB from one platform to another when the endianness is the same on both platforms.

Given that, I wanted to see if the process for converting an entire container using transportable tablespaces could be applied to a single PDB. Unfortunately there is downtime required, but here is what I found:

Create Backup of Source PDB

Determine which tablespaces hold the data for the schemas being moved, check for any violations which may prevent the transport, set those tablespaces to read-only, and then connect RMAN to the PDB and take a backup.
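
To find which tablespaces hold a given schema’s segments, a quick dictionary query is enough (HR is the example owner used in this post):

SELECT DISTINCT tablespace_name
FROM   dba_segments
WHERE  owner = 'HR';

With the tablespace list in hand, verify the source platform and run the transport set check: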

SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
          2 Solaris[tm] OE (64-bit)                  Big                     0

SQL> exec SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'HR_TEST', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SQL> SELECT * FROM transport_set_violations;

no rows selected


export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RMAN> alter tablespace hr_test read only;

using target database control file instead of recovery catalog
Statement processed

RMAN> alter tablespace users read only;

Statement processed

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
BACKUP TO PLATFORM 'Linux x86 64-bit' AS COMPRESSED BACKUPSET FILESPERSET 1
   FORMAT '/export/home/oracle/backups/%d_xplat_bs%s_piece%p_%T_%U.bkp'
   DATAPUMP FORMAT '/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp'
   TABLESPACE HR_TEST, USERS;
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=752 instance=xttsun1 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=745 instance=xttsun1 device type=DISK

Starting backup at 29-OCT-20
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_XTTSUN_aahC":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   EXPDP> Master table "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_XTTSUN_aahC is:
   EXPDP>   /u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace HR_TEST:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
   EXPDP> Datafiles required for transportable tablespace USERS:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
   EXPDP> Job "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully completed at Thu Oct 29 13:14:42 2020 elapsed 0 00:02:17
Export completed

channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) in backup set
input datafile file number=00018 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) in backup set
input datafile file number=00017 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
channel XPLATFORM_BKUP_DISK2: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK2: backup set complete, elapsed time: 00:00:03
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:35
channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:01
Finished backup at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

Copy Backup Files and Metadata File to New Platform

Using normal methods such as scp, copy the files generated by RMAN to the new platform. One caveat is that, prior to 19c, the Data Pump file for the metadata is written to $ORACLE_HOME/dbs, so you must make sure that you have sufficient space there for the process to complete successfully.
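
For example, something along these lines moves both backup pieces and the metadata dump to the staging area used in the restore below (the target hostname is made up; the directories match the FORMAT clauses above and the restore step later on):

scp /export/home/oracle/backups/XTTSUN_xplat_bs3*_piece1_*.bkp \
    /export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp \
    oracle@xttlnux-host:/archive/xttlnux/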

Restore / Convert Endian on New Platform

The last step in the process is to set up the new PDB, create the schema owner in the PDB, and then use RMAN to transport the tablespaces. The general process can be completed with the following steps:

  • Create an empty pluggable database and create the target user in the new PDB:
SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
         13 Linux x86 64-bit                         Little                  0

SQL> CREATE PLUGGABLE DATABASE xttpdb ADMIN USER pdb_admin IDENTIFIED BY Welcome123 CREATE_FILE_DEST='+DATA';

Pluggable database created.

SQL> CREATE USER "HR" IDENTIFIED BY VALUES 'S:904B950378D93630E729A1A3051796718B7511BA10F5273F2F031E5CF76D;T:22D35135BC49EEAF5F4C896F26E2E8802D869C6451F39241BB4ACB76A38B7396E1C516DC78F0AFB1D3A96ABB761543673ABCF540CAC36927564F892FC148B436934AD38AD6D35C1EB8F3BB55F70BB02A'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
PASSWORD EXPIRE
ACCOUNT LOCK;

SQL> GRANT "RESOURCE" TO "HR";

Grant succeeded.
SQL> GRANT CREATE SESSION TO "HR";

Grant succeeded.
SQL> GRANT ALTER SESSION TO "HR";

Grant succeeded.
SQL> GRANT UNLIMITED TABLESPACE TO "HR";

Grant succeeded.
SQL> GRANT CREATE SYNONYM TO "HR";

Grant succeeded.
SQL> GRANT CREATE VIEW TO "HR";

Grant succeeded.
SQL> GRANT CREATE SEQUENCE TO "HR";

Grant succeeded.
SQL> GRANT CREATE DATABASE LINK TO "HR";

Grant succeeded.
SQL> GRANT EXECUTE ON "SYS"."DBMS_STATS" TO "HR";

Grant succeeded.
SQL> ALTER USER "HR" DEFAULT ROLE ALL;

User altered.

  • Invoke RMAN and connect to the target PDB:

export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
RESTORE
   ALL FOREIGN DATAFILES TO NEW
   FROM BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp'
   BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp'
   DUMP FILE FROM BACKUPSET '/archive/xttlnux/transport_tblsp_hr_test_meta.dmp';
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

using target database control file instead of recovery catalog
allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=512 instance=xttlnux2 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=615 instance=xttlnux1 device type=DISK

Starting restore at 29-OCT-20

channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK2: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK2: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restoring foreign file 17 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.929.1055079821
channel XPLATFORM_BKUP_DISK2: restoring foreign file 18 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.928.1055079877
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:02
channel XPLATFORM_BKUP_DISK2: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: restored backup piece 1
channel XPLATFORM_BKUP_DISK2: restore complete, elapsed time: 00:00:26
channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring Data Pump dump file to /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/backup_tts_XTTLNUX_32652.dmp
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:00

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_XTTLNUX_CFfw" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_XTTLNUX_CFfw":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings

ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings

   IMPDP> Job "SYS"."TSPITR_IMP_XTTLNUX_CFfw" completed with 2 error(s) at Thu Oct 29 13:44:46 2020 elapsed 0 00:00:32
Import completed

Finished restore at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

At this point, the tablespaces have been converted and plugged into the target PDB and all objects are ready for use!
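
One follow-up worth remembering: tablespaces transported this way are plugged in read-only, so once you are satisfied with the result, switch them back to read-write in the target PDB (and on the source as well if it is staying in service):

alter tablespace hr_test read write;
alter tablespace users read write;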

Interesting Items of Note

After performing the above conversion and migration without any issues, I started to poke around (as I normally do) to see what “really” happened, and I found something quite interesting. Looking at “v$datafile”, I saw that while RMAN did assign OMF filenames at restore time, it placed those files in a totally different directory than all of the other files in the PDB.

SQL> select file#, name from v$datafile;

     FILE# NAME
---------- ----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.919.1055165973
	34 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.922.1055166029

As you may have noticed, there are two GUIDs present in ASM for the same PDB: “B2A8DA5A8E2A777EE0540010E05D13EA” and “B2E8839B69346990E05302EC090A3057”. Looking at the source system, it appears that despite specifying “TO NEW” in my RMAN RUN block, the restore still carried over the GUID-based directory from the old system. There is nothing technically wrong with this, other than the files not being in the same directory and potentially causing an administration headache, but if you would like to fix it, the normal methods apply, one of which is “alter database move datafile”. The results can be seen below:

SQL> alter database move datafile 33;

Database altered.

Elapsed: 00:00:08.40	
SQL> alter database move datafile 34;

Database altered.

Elapsed: 00:00:00.15
SQL> select file#, name from v$datafile;

     FILE# NAME
---------- -----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/hr_test.957.1055166369
	34 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/users.919.1055166403

6 rows selected.

As you can see, all of the files now reside within ASM under the same GUID and are still OMF. Hopefully this will help you decide whether this is an appropriate migration method to get a single PDB onto a platform with different endianness!

Upgrading Oracle using autoupgrade.jar and Migration From NON-CDB to CDB Architecture

Over the years, I have done many migrations and many upgrades to Oracle databases. Throughout the versions, that methodology has varied slightly and in all cases has been painstakingly manual until now. Oracle’s “autoupgrade.jar” changes all of that.

“autoupgrade.jar” takes all of the following tasks you used to do manually and automates most of them:

  • prechecks
  • prefixups
  • upgrade
  • migration from non-cdb to cdb
  • postfixups

Periodically, Oracle makes updates and changes to the tool. The latest version and notes for AutoUpgrade can be obtained from MOS note 2485457.1 – AutoUpgrade Tool. Prior to your own testing, ensure that you have reviewed this note and downloaded the latest version. Also, Mike Dietrich has an entire blog dedicated to upgrades and the tool itself:

https://mikedietrichde.com/2019/04/29/the-new-autoupgrade-utility-in-oracle-19c/

Creating a configuration file:

The basis of this tool is a configuration file containing parameters that describe the “source” and “target” environments. You can generate a sample configuration file as a starting point by issuing the following command:

java -jar ./autoupgrade.jar -create_sample_file config

A full explanation of all the parameters allowed within a configuration file is detailed in the documentation:

https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/autoupgrade-utility-configuration-files.html#GUID-6AF95954-DAD4-47E2-9318-5EB35631F83D

For example, if you are upgrading a non-cdb to non-cdb, the contents may contain the following parameters:

upg1.dbname=testdb
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.sid=testdb1
upg1.log_dir=/home/oracle/upg_logs/testdb1
upg1.upgrade_node=orcl-vbox2
upg1.target_version=19
upg1.run_utlrp=yes
upg1.timezone_upg=yes

In cases where you are upgrading from non-cdb to cdb, there are a few additional parameters:

upg1.dbname=testdb
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.sid=testdb1
upg1.log_dir=/home/oracle/upg_logs/testdb1
upg1.upgrade_node=orcl-vbox2
upg1.target_version=19
upg1.run_utlrp=yes
upg1.timezone_upg=yes
upg1.target_cdb=vboxdb1
upg1.target_pdb_copy_option=file_name_convert=('/u02/oradata/TESTDB/datafile/o1_mf_example_hrfz1tct_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_sysaux_hrfyw505_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_system_hrfyw480_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_temp_hrfz1omf_.tmp','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_undotbs1_hrfyw5f9_.dbf','+DATA','/u02/oradata/TESTDB/datafile/o1_mf_users_hrfyw5hh_.dbf','+DATA')
upg1.target_pdb_name=testdb

Notice that when also converting to cdb, at a minimum the following parameters are required:

  • target_cdb
  • target_pdb_name
  • target_pdb_copy_option (optional; if it is not specified but the previous two parameters are, the PDB is created with the NOCOPY option)

One thing to keep in mind is that when you use the “target_pdb_copy_option=file_name_convert” parameter, you must list every datafile, much like when cloning a database. Unfortunately, at the time of this writing, there is no option such as “SET NEWNAME FOR DATABASE” like the one that exists in the RMAN DUPLICATE process.

To help facilitate populating this parameter, this SQL may help:

SELECT
    LISTAGG(
        chr(39)||file_name||chr(39)||','||chr(39)||'+DATA'||chr(39)||','
    ) WITHIN GROUP(
    ORDER BY
        file_id
    ) AS file_name_convert
FROM
    dba_data_files
GROUP BY
    file_name
union
SELECT
    LISTAGG(
        chr(39)||file_name||chr(39)||','||chr(39)||'+DATA'||chr(39)||','
    ) WITHIN GROUP(
    ORDER BY
        file_id
    ) AS file_name_convert
FROM
    dba_temp_files
GROUP BY
    file_name;
FILE_NAME_CONVERT
'/u02/oradata/TESTDB/datafile/o1_mf_example_hrfz1tct_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_sysaux_hrfyw505_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_system_hrfyw480_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_temp_hrfz1omf_.tmp','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_undotbs1_hrfyw5f9_.dbf','+DATA',
'/u02/oradata/TESTDB/datafile/o1_mf_users_hrfyw5hh_.dbf','+DATA',

With this output (after removing the final trailing comma), you can plug the list into the “target_pdb_copy_option” parameter as shown in the sample configuration above.

*** Update on 10/15/2020
Shortly after publishing this, I was made aware that there is also the option of leveraging the target database’s “db_create_file_dest” parameter to place the converted files in the correct OMF location. If you wish to use this method instead of listing every file in the “target_pdb_copy_option=file_name_convert=” parameter, populate the parameter as follows:

target_pdb_copy_option=file_name_convert=NONE
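
In that case, the only extra setup is making sure OMF is configured on the target CDB; a minimal sketch, with “+DATA” standing in for whatever disk group or directory you actually use:

ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH;

With that in place, the copied datafiles land under the CDB’s OMF destination without having to list each file.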

Executing pre-checks:

Now that the configuration file is ready to be used, you can execute the pre-checks and then either fix the findings yourself or allow the tool to fix them for you. In either case, you should always re-execute the pre-checks after the fixes to ensure that everything has been addressed. To begin the pre-check process, execute the following command:

java -jar autoupgrade.jar -config testdb_autoupgrade_config.cfg -mode analyze

This analyzes the source database and creates a log file showing all of the items that need to be fixed and whether the tool can fix them automatically or they need to be fixed manually. The log file is created under the directory specified in the “log_dir” parameter of the configuration file, within a subdirectory corresponding to the job number of the “analyze” run.

Executing fixups:

Once the pre-checks are complete, the tool can be run again to fix everything it is able to fix automatically. You can then execute the pre-checks one more time to ensure that all issues have been addressed. To execute the fixups, run the following command:

java -jar autoupgrade.jar -config testdb_autoupgrade_config.cfg -mode fixups

Deploying the upgrade:

Once all of the fixups are completed, you are ready to upgrade. The upgrade is quite easy using the tool:

java -jar autoupgrade.jar -config testdb_autoupgrade_config.cfg -mode deploy

Finalizing the upgrade:

As with any upgrade, paying attention to the log files is very important. That being said, a few items that I noticed when testing on my test server:

  • oratab is not modified if you are migrating from non-cdb to cdb
  • clusterware is not updated if you are migrating from non-cdb to cdb (a manual cleanup sketch for these two items follows this list)
  • if migrating from non-cdb to cdb, an existing cdb database must be specified or one must be pre-created
  • be sure to check for any Guaranteed Restore Points that might be created
  • check for the proper setting of the ‘compatible’ parameter and advance it at the appropriate time
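
For the first two items the cleanup is manual; a rough sketch using the names from the earlier configuration (adjust for your environment, since the old non-cdb registration simply goes away once the database lives inside the CDB):

# deregister the old non-cdb from clusterware, if it was registered there
srvctl remove database -db testdb
# then edit /etc/oratab by hand to remove or repoint the old testdb entry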

Also, the tool does so much more, and this is just the tip of the iceberg. I plan to investigate more of these options such as automated patching and fleet patching in subsequent posts.

Oracle 12 Non-CDB to PDB Migration Methods

Continuing my effort to finish blog posts that have been on my list for a while, here is a review of the methods for migrating a Non-Container Database (Non-CDB) into a Container Database (CDB) with Pluggables.

I wish this were easier and Oracle gave you a complete “in-place” method to do this, but the only way to get a Non-CDB (the pre-12c architecture) into a CDB is to create a new CDB and migrate the Non-CDB into it as a PDB.

I will assume that the creation of the new CDB is complete. So let’s look at some methods of creating / migrating the PDB and their pros and cons.

Whichever method you choose, you start by creating the XML file that describes the new PDB:

On the Non-CDB:

	SHUTDOWN IMMEDIATE;
	startup mount exclusive;
	alter database open read only;

Create the PDB Describe XML File:

     BEGIN
       DBMS_PDB.DESCRIBE(
          pdb_descr_file => '/export/home/oracle/nonCDBToPDB.xml');
     END;
     /
     shutdown immediate;

Verify the XML File on the new CDB:

     SET SERVEROUTPUT ON
     DECLARE
         hold_var boolean;
     begin
          hold_var := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file=>'/export/home/oracle/nonCDBToPDB.xml');
     if hold_var then
          dbms_output.put_line('YES');
     else
          dbms_output.put_line('NO');
     end if;
     end;
     /

Check for errors:

   set lines 300
   col cause for a25
   col message for a150
   set pagesize 9999
   select name,cause,action,type,message,status from PDB_PLUG_IN_VIOLATIONS;

Before you can proceed, all errors listed in PDB_PLUG_IN_VIOLATIONS must be resolved. Next are two of the three methods to migrate a Non-CDB to a PDB. I will leave the “COPY” method out of this post, as copying is not a feasible way to move any of the databases I deal with on a day-to-day basis.

Create the PDB using “NOCOPY”. While “NOCOPY” is the fastest, it could be the most problematic long term, because this option leaves all datafiles where they came from, and since the new CDB is likely to be on the same host, the naming differences could be confusing at some point. Nonetheless, for demonstration, the command is quite easy:

     CREATE PLUGGABLE DATABASE devpdb USING '/export/home/oracle/nonCDBToPDB.xml' NOCOPY TEMPFILE REUSE;

Based on my testing, the method I liked the most was the “MOVE” option. To some, this may seem invasive, but for my environments this was the best option because the file names are also corrected to use the proper OMF paths for the new CDB. While this method wasn’t as fast as NOCOPY, in my 18TB environment with 1200 datafiles the command finished in just over 30 minutes. Pretty acceptable in my book:

     CREATE PLUGGABLE DATABASE devpdb USING '/export/home/oracle/nonCDBToPDB.xml' MOVE TEMPFILE REUSE;

Finishing off the migration to the new PDB is the same regardless of the migration method:

Verify the PDB:

	select name,guid, open_mode from v$pdbs;
	col pdb_name for a15
	select pdb_name, status from dba_pdbs;

Clean up the PDB. This was by far one of the longest operations of the whole conversion:

	alter session set container=devpdb;
	@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

Check for errors:

	set lines 300
	col cause for a25
	col message for a50
	col action for a100
	set pagesize 9999
	select name,cause,action,type,message,status from PDB_PLUG_IN_VIOLATIONS;

In my case, the only remaining violations that I had were some orphan database services that were stuck in the metadata. To clean this up you can execute:

	alter session set container=<pdb_name>;
	
	select SERVICE_ID,NAME,NETWORK_NAME,PDB FROM CDB_SERVICES ORDER BY PDB,SERVICE_ID;
	select SERVICE_ID,NAME,NETWORK_NAME FROM DBA_SERVICES ORDER BY SERVICE_ID;
	
	exec dbms_service.delete_service('<service_name>')

Ensure that all pluggables are open and open on restart:

	alter pluggable database all open;
	alter pluggable database all save state;
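
To confirm that the saved state was recorded, a quick check (not part of the original run) is to query “dba_pdb_saved_states”:

	select con_name, state from dba_pdb_saved_states;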

As with anything in our business, adequate testing goes a long way and these were my observations in my environment. If your experience varies, I sure would like to hear about it.