
Why is My App Table Scanning in PostgreSQL but not Oracle?

My team and I have been working on a lot of migrations off of Oracle and onto Google Cloud SQL for PostgreSQL / AlloyDB lately. One of the common things I have seen in my performance tuning and migration work is that Oracle handles certain application datatypes differently than PostgreSQL. My most recent client used the “float” datatype extensively in their Java code, and unfortunately the PostgreSQL JDBC driver doesn’t convert that datatype the way the Oracle JDBC driver does. The datatype mismatch ends up as a full table scan in PostgreSQL, whereas Oracle was using an index.

*** Note: This client did not use an ORM in their code. While I still need to test it, I am hopeful that this issue will not manifest itself when using an ORM like SQLAlchemy (Python) or Hibernate (Java).

While in Oracle numbers are usually stored in the NUMBER datatype, PostgreSQL gives you many options to do the same thing:

  • numeric (x)
  • numeric (x,y)
  • numeric
  • smallint
  • bigint
  • int

Each serves a purpose and should be chosen with careful analysis. That said, don’t forget about the code! To demonstrate why, I modified a simple Java test harness I usually use to test connectivity and check SSL encryption during migrations to show what happens if the datatypes are not looked at carefully.
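Before looking at the results, it helps to know roughly how the PostgreSQL JDBC driver binds the common Java setters. This sketch is inferred from the driver behavior visible in the query plans later in this post; the variable names are just placeholders:

preStatement.setInt(1, intVal);          // bound as integer
preStatement.setLong(1, longVal);        // bound as bigint
preStatement.setBigDecimal(1, decVal);   // bound as numeric
preStatement.setFloat(1, floatVal);      // bound as real (float4)
preStatement.setDouble(1, doubleVal);    // bound as double precision (float8)

When a real or double precision parameter is compared against a numeric column, PostgreSQL casts the column side to double precision, and that cast is what defeats the numeric index.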

As you can see, in Oracle the JDBC driver converts each of the following Java types to NUMBER, and the same query plan is achieved in every case: an index range scan, with no table scan. The output is below and the Oracle code is at the bottom of the post:

java -classpath /Users/shaneborden/Documents/java/ojdbc11.jar:/Users/shaneborden/Documents/java OracleJdbcTest
Password: *******
=====  Database info =====
   DatabaseProductName: Oracle
   DatabaseProductVersion: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0
   DatabaseMajorVersion: 19
   DatabaseMinorVersion: 0
=====  Driver info =====
   DriverName: Oracle JDBC driver
   DriverVersion: 21.8.0.0.0
   DriverMajorVersion: 21
   DriverMinorVersion: 8
=====  JDBC/DB attributes =====
   Supports getGeneratedKeys(): true
===== Database info =====


===== Query Plan - Cast Int to Numeric =====
   Plan hash value: 423740054

   -----------------------------------------------------------------------------------------------------------
   | Id  | Operation                           | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
   -----------------------------------------------------------------------------------------------------------
   |   0 | SELECT STATEMENT                    |                     |     2 |    28 |     5   (0)| 00:00:01 |
   |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATATYPE_TEST       |     2 |    28 |     5   (0)| 00:00:01 |
   |*  2 |   INDEX RANGE SCAN                  | DATATYPE_NUMBER_VAL |     2 |       |     3   (0)| 00:00:01 |
   -----------------------------------------------------------------------------------------------------------

   Predicate Information (identified by operation id):
   ---------------------------------------------------

      2 - access("NUMBER_VAL"=:1)


===== Query Plan - Cast Long to Numeric =====
   Plan hash value: 423740054

   -----------------------------------------------------------------------------------------------------------
   | Id  | Operation                           | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
   -----------------------------------------------------------------------------------------------------------
   |   0 | SELECT STATEMENT                    |                     |     2 |    28 |     5   (0)| 00:00:01 |
   |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATATYPE_TEST       |     2 |    28 |     5   (0)| 00:00:01 |
   |*  2 |   INDEX RANGE SCAN                  | DATATYPE_NUMBER_VAL |     2 |       |     3   (0)| 00:00:01 |
   -----------------------------------------------------------------------------------------------------------

   Predicate Information (identified by operation id):
   ---------------------------------------------------

      2 - access("NUMBER_VAL"=:1)


===== Query Plan - Cast Float to Numeric =====
   Plan hash value: 423740054

   -----------------------------------------------------------------------------------------------------------
   | Id  | Operation                           | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
   -----------------------------------------------------------------------------------------------------------
   |   0 | SELECT STATEMENT                    |                     |     2 |    28 |     5   (0)| 00:00:01 |
   |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATATYPE_TEST       |     2 |    28 |     5   (0)| 00:00:01 |
   |*  2 |   INDEX RANGE SCAN                  | DATATYPE_NUMBER_VAL |     2 |       |     3   (0)| 00:00:01 |
   -----------------------------------------------------------------------------------------------------------

   Predicate Information (identified by operation id):
   ---------------------------------------------------

      2 - access("NUMBER_VAL"=:1)


===== Query Plan - Cast Double to Numeric =====
   Plan hash value: 423740054

   -----------------------------------------------------------------------------------------------------------
   | Id  | Operation                           | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
   -----------------------------------------------------------------------------------------------------------
   |   0 | SELECT STATEMENT                    |                     |     2 |    28 |     5   (0)| 00:00:01 |
   |   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| DATATYPE_TEST       |     2 |    28 |     5   (0)| 00:00:01 |
   |*  2 |   INDEX RANGE SCAN                  | DATATYPE_NUMBER_VAL |     2 |       |     3   (0)| 00:00:01 |
   -----------------------------------------------------------------------------------------------------------

   Predicate Information (identified by operation id):
   ---------------------------------------------------

      2 - access("NUMBER_VAL"=:1)


=========================
Command successfully executed

In PostgreSQL, however, the same statement gets an index scan when binding “int” and “long” values against the “numeric” column, but binding “float” and “double” values results in a table scan. Similar behavior is seen with the other PostgreSQL datatypes such as “smallint”, “bigint” and “int”. The output is shown below and the PostgreSQL code is located at the bottom of the post:

java -classpath /Users/shaneborden/Documents/java/postgresql-42.5.0.jar:/Users/shaneborden/Documents/java PostgresJdbcTest
Password: *******
=====  Database info =====
   DatabaseProductName: PostgreSQL
   DatabaseProductVersion: 14.4
   DatabaseMajorVersion: 14
   DatabaseMinorVersion: 4
=====  Driver info =====
   DriverName: PostgreSQL JDBC Driver
   DriverVersion: 42.5.0
   DriverMajorVersion: 42
   DriverMinorVersion: 5
=====  JDBC/DB attributes =====
   Supports getGeneratedKeys(): true
===== Database info =====
   Current Date from Postgres : 2022-12-27 16:25:41.493047-05
   Client connected pid from Postgres : 17727
   Postgres DB Unique Name from Postgres : mytpchdb
   Client connected hostname from Postgres : null
   Client connected application_name from Postgres : PostgreSQL JDBC Driver


===== Query Plan - Cast Int to Numeric =====
   Index Scan using datatype_test_numeric_decimal on datatype_test  (cost=0.43..6.47 rows=2 width=28) (actual time=0.057..0.060 rows=2 loops=1)
     Index Cond: (numeric_val = '10001'::numeric)
   Planning Time: 0.389 ms
   Execution Time: 0.085 ms


===== Query Plan - Cast Long to Numeric =====
   Index Scan using datatype_test_numeric_decimal on datatype_test  (cost=0.43..6.47 rows=2 width=28) (actual time=0.011..0.013 rows=2 loops=1)
     Index Cond: (numeric_val = '10001'::numeric)
   Planning Time: 0.126 ms
   Execution Time: 0.027 ms


===== Query Plan - Cast Float to Numeric =====
   Seq Scan on datatype_test  (cost=0.00..233334.01 rows=50000 width=28) (actual time=1050.733..2622.224 rows=2 loops=1)
     Filter: ((numeric_val)::double precision = '10001'::real)
     Rows Removed by Filter: 9999999
   Planning Time: 0.094 ms
   Execution Time: 2622.273 ms


===== Query Plan - Cast Double to Numeric =====
   Seq Scan on datatype_test  (cost=0.00..233334.01 rows=50000 width=28) (actual time=1055.081..2629.634 rows=2 loops=1)
     Filter: ((numeric_val)::double precision = '10001'::double precision)
     Rows Removed by Filter: 9999999
   Planning Time: 0.096 ms
   Execution Time: 2629.660 ms

As you can see, it’s very important to ensure that the datatypes in your code are fully compliant with the destination RDBMS. The “double” and “float” types in the Java code cause a table scan! While Oracle has become very forgiving about this over the years, PostgreSQL just isn’t there yet, so make sure you adjust your code accordingly.
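If changing the Java types everywhere is impractical, there are a couple of ways to get the index back. Here is a minimal sketch against the test table from this post; whether you switch to setBigDecimal or add an explicit cast in the SQL, validate the resulting plan against your own workload:

// Option 1: bind a BigDecimal so the PostgreSQL JDBC driver sends a numeric parameter
PreparedStatement ps = conn.prepareStatement(
    "select * from datatype_test where numeric_val = ?");
ps.setBigDecimal(1, java.math.BigDecimal.valueOf(doubleQueryParam));

// Option 2: keep setFloat()/setDouble() but cast the parameter to numeric in the SQL
PreparedStatement ps2 = conn.prepareStatement(
    "select * from datatype_test where numeric_val = ?::numeric");
ps2.setDouble(1, doubleQueryParam);

Either way the comparison stays numeric = numeric, instead of being rewritten as (numeric_val)::double precision = $1, which is the form that forces the sequential scan.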

Code Samples:

Jar / Java Requirements:

  • openjdk 11
  • postgresql-42.5.0.jar
  • ojdbc11.jar

Compile:

To compile the code:
Oracle: javac OracleJdbcTest.java
Postgres: javac PostgresJdbcTest.java

Oracle Java Test Harness:

Note: To execute the code follow the instructions in the comment block at the top of the code. The instructions to create the sample table objects are also contained within the same comment block.

import java.sql.*;
import java.util.Properties;

/*
 *
 * Simple Java program to connect to an Oracle database using the Oracle JDBC thin driver.
 * Make sure you have the Oracle JDBC thin driver in your classpath before running this program.
 *
 javac OracleJdbcTest.java
 java -classpath /Users/shaneborden/Documents/java/ojdbc11.jar:/Users/shaneborden/Documents/java OracleJdbcTest
 Setup:
 CREATE TABLE tc.datatype_test (
		number_decimal_val number(12,2), 
		number_val number(12),
    random_val number(4))
  TABLESPACE USERS;
 CREATE SEQUENCE tc.datatype_test_seq
  START WITH     1
  INCREMENT BY   1
  NOCACHE
  NOCYCLE;
 BEGIN
  FOR i in 1 .. 1000000
 LOOP
 INSERT INTO tc.datatype_test VALUES ( 
		tc.datatype_test_seq.nextval,
		floor(dbms_random.value(1, 1000000)),
    floor(dbms_random.value(1, 1000)));
 END LOOP;
 END;
 /

CREATE INDEX tc.datatype_number_decimal_val on tc.datatype_test(number_decimal_val);
CREATE INDEX tc.datatype_number_val on tc.datatype_test(number_val);
*/

public class OracleJdbcTest
{
  public static void main(String args[]) throws SQLException, ClassNotFoundException
  {
    try
    {
      java.io.Console console = System.console();
      Boolean dataTypeCheck = true;
      String sourceDatatType = "Numeric";
      String inputPassword = new String(console.readPassword("Password: "));
      Integer intQueryParam = 10001;
      Long longQueryParam = 10001L;
      Float floatQueryParam = 10001f;
      Double doubleQueryParam = 10001.0;

      /**Set URL of Oracle database server*/
      String url = "jdbc:oracle:thin:@//192.168.1.105:1521/vboxncdb.localdomain.com";
      String xPlanSql = "select * from table(dbms_xplan.display)";
       
      /** properties for creating connection to Oracle database */
      Properties props = new Properties();
      props.setProperty("user", "datatypeTestUser");
      props.setProperty("password",  inputPassword);
       
      /** creating connection to Oracle database using JDBC*/
       
      Connection conn = DriverManager.getConnection(url,props);
      DatabaseMetaData dbmd = conn.getMetaData();
       
      System.out.println("=====  Database info =====");
      System.out.println("   DatabaseProductName: " + dbmd.getDatabaseProductName() );
      System.out.println("   DatabaseProductVersion: " + dbmd.getDatabaseProductVersion() );
      System.out.println("   DatabaseMajorVersion: " + dbmd.getDatabaseMajorVersion() );
      System.out.println("   DatabaseMinorVersion: " + dbmd.getDatabaseMinorVersion() );
      System.out.println("=====  Driver info =====");
      System.out.println("   DriverName: " + dbmd.getDriverName() );
      System.out.println("   DriverVersion: " + dbmd.getDriverVersion() );
      System.out.println("   DriverMajorVersion: " + dbmd.getDriverMajorVersion() );
      System.out.println("   DriverMinorVersion: " + dbmd.getDriverMinorVersion() );
      System.out.println("=====  JDBC/DB attributes =====");
      if (dbmd.supportsGetGeneratedKeys() )
        System.out.println("   Supports getGeneratedKeys(): true");
      else
        System.out.println("   Supports getGeneratedKeys(): false");
      System.out.println("===== Database info =====");
       
      String sql = "with session_data as (";
            sql = sql + "select sysdate as current_day,SYS_CONTEXT ('USERENV', 'DB_UNIQUE_NAME') as db_name,SYS_CONTEXT ('USERENV', 'SERVICE_NAME') as service_name, ";
            sql = sql + "SYS_CONTEXT ('USERENV', 'HOST') as host, SYS_CONTEXT ('USERENV', 'IP_ADDRESS') as ip_address,  SYS_CONTEXT('USERENV','SID') sid from dual) ";
            sql = sql + "select sd.current_day, sd.db_name, sd.service_name, sd.host, sd.ip_address, ";
            sql = sql + "sd.sid, nvl(sci.network_service_banner, 'Traffic Not Encrypted') network_service_banner ";
            sql = sql + "from session_data sd ";
            sql = sql + "left join v$session_connect_info sci on (sd.sid = sci.sid) ";
            sql = sql + "where sci.network_service_banner like '%Crypto-checksumming service adapter%'";
       
      /** creating PreparedStatement object to execute query*/
      PreparedStatement preStatement = conn.prepareStatement(sql);
       
      ResultSet result = preStatement.executeQuery();
       
      while(result.next())
      {
        System.out.println("Current Date from Oracle : " +         result.getString("current_day"));
        System.out.println("Oracle DB Unique Name from Oracle : " +         result.getString("db_name"));
        System.out.println("Oracle Connected Listener Service Name from Oracle : " +         result.getString("service_name"));
        System.out.println("Client connected hostname from Oracle : " +         result.getString("host"));
        System.out.println("Client connected ip_address from Oracle : " +         result.getString("ip_address"));
        System.out.println("Client connected encryption info from Oracle : " +         result.getString("network_service_banner"));
      }


      // Use .equals() for String comparison; == compares object references in Java
      if (dataTypeCheck) {
        if (sourceDatatType.equals("Numeric122")) {
          sql = "EXPLAIN PLAN FOR ";
          sql = sql + "select * from tc.datatype_test where number_decimal_val = ?";
        } else if (sourceDatatType.equals("Numeric")) {
          sql = "EXPLAIN PLAN FOR ";
          sql = sql + "select * from tc.datatype_test where number_val = ?";
        }

        System.out.println("");
        System.out.println("");
        System.out.println("===== Query Plan - Cast Int to "+ sourceDatatType +" =====");

        /** creating PreparedStatement object to execute query*/
        preStatement = conn.prepareStatement(sql);

        preStatement.setInt(1, intQueryParam);
          
        result = preStatement.executeQuery();

        PreparedStatement xPlanStatement = conn.prepareStatement(xPlanSql);
        ResultSet xPlanResult = xPlanStatement.executeQuery();

        while(xPlanResult.next())
        {
            System.out.println("   " + xPlanResult.getString(1));
        }

        System.out.println("");
        System.out.println("");
        System.out.println("===== Query Plan - Cast Long to "+ sourceDatatType +" =====");

        preStatement.setLong(1, longQueryParam);
        
        result = preStatement.executeQuery();

        xPlanStatement = conn.prepareStatement(xPlanSql);
        xPlanResult = xPlanStatement.executeQuery();

        while(xPlanResult.next())
        {
            System.out.println("   " + xPlanResult.getString(1));
        }

        System.out.println("");
        System.out.println("");
        System.out.println("===== Query Plan - Cast Float to "+ sourceDatatType +" =====");

        preStatement.setFloat(1, floatQueryParam);
        
        result = preStatement.executeQuery();

        xPlanStatement = conn.prepareStatement(xPlanSql);
        xPlanResult = xPlanStatement.executeQuery();

        while(xPlanResult.next())
        {
            System.out.println("   " + xPlanResult.getString(1));
        }

        System.out.println("");
        System.out.println("");
        System.out.println("===== Query Plan - Cast Double to "+ sourceDatatType +" =====");

        preStatement.setDouble(1, doubleQueryParam);
        
        result = preStatement.executeQuery();

        xPlanStatement = conn.prepareStatement(xPlanSql);
        xPlanResult = xPlanStatement.executeQuery();

        while(xPlanResult.next())
        {
            System.out.println("   " + xPlanResult.getString(1));
        }

        System.out.println("");
        System.out.println("");
      }

      conn.close();
      System.out.println("=========================");
      System.out.println("Command successfully executed");

    }

    catch(SQLException exp) {
	   System.out.println("Exception: " + exp.getMessage());
	   System.out.println("SQL State: " + exp.getSQLState());
	   System.out.println("Vendor Error: " + exp.getErrorCode());
    }
    
  }
}

PostgreSQL Java Test Harness:

Note: To execute the code follow the instructions in the comment block at the top of the code. The instructions to create the sample table objects are also contained within the same comment block.

import java.sql.*;
import java.util.Properties;

/*
 *
 * Simple Java program to connect to a PostgreSQL database using the PostgreSQL JDBC driver.
 * Make sure you have the PostgreSQL JDBC driver in your classpath before running this program.
 *
 java -classpath /Users/shaneborden/Documents/java/postgresql-42.5.0.jar:/Users/shaneborden/Documents/java PostgresJdbcTest
 Setup:
 CREATE TABLE datatype_test (
		int_val int, 
		bigint_val bigint, 
		numeric_val numeric(12),
    numeric_decimal_val numeric(12,2), 
		smallint_val smallint);
INSERT INTO datatype_test VALUES ( 
		generate_series(0,10000000), 
		generate_series(0,10000000), 
		floor(random()*10000000),
    random()*10000000, 
		floor(random()* (32765-1 + 1) + 1) );

SET SESSION max_parallel_maintenance_workers TO 4;
SET SESSION maintenance_work_mem TO '2 GB';

CREATE INDEX datatype_test_int on datatype_test(int_val);
CREATE INDEX datatype_test_bigint on datatype_test(bigint_val);
CREATE INDEX datatype_test_numeric on datatype_test(numeric_val);
CREATE INDEX datatype_test_numeric_decimal on datatype_test(numeric_decimal_val);
CREATE INDEX datatype_test_smallint on datatype_test(smallint_val);
*/

public class PostgresJdbcTest
{
  public static void main(String args[]) throws SQLException, ClassNotFoundException
  {
    try
    {
       java.io.Console console = System.console();
       Boolean dataTypeCheck = true;
       String sourceDatatType = "Numeric";
       String inputPassword = new String(console.readPassword("Password: "));
       Integer intQueryParam = 10001;
       Long longQueryParam = 10001L;
       Float floatQueryParam = 10001f;
       Double doubleQueryParam = 10001.0;

       /**Set URL of Postgres database server*/
        String url = "jdbc:postgresql://localhost:6000/mytpchdb";
       
       /** properties for creating connection to Postgres database */
       Properties props = new Properties();
       props.setProperty("user", "postgres");
       props.setProperty("password",  inputPassword);
       
       /** creating connection to Postgres database using JDBC*/
       
       Connection conn = DriverManager.getConnection(url,props);
       DatabaseMetaData dbmd = conn.getMetaData();
       
       System.out.println("=====  Database info =====");
       System.out.println("   DatabaseProductName: " + dbmd.getDatabaseProductName() );
       System.out.println("   DatabaseProductVersion: " + dbmd.getDatabaseProductVersion() );
       System.out.println("   DatabaseMajorVersion: " + dbmd.getDatabaseMajorVersion() );
       System.out.println("   DatabaseMinorVersion: " + dbmd.getDatabaseMinorVersion() );
       System.out.println("=====  Driver info =====");
       System.out.println("   DriverName: " + dbmd.getDriverName() );
       System.out.println("   DriverVersion: " + dbmd.getDriverVersion() );
       System.out.println("   DriverMajorVersion: " + dbmd.getDriverMajorVersion() );
       System.out.println("   DriverMinorVersion: " + dbmd.getDriverMinorVersion() );
       System.out.println("=====  JDBC/DB attributes =====");
       if (dbmd.supportsGetGeneratedKeys() )
         System.out.println("   Supports getGeneratedKeys(): true");
       else
         System.out.println("   Supports getGeneratedKeys(): false");
       System.out.println("===== Database info =====");
       
       String sql = "select now() as current_day,current_database() as db_name, ";
              sql = sql + "client_hostname as host, application_name, pid from pg_stat_activity ";
              sql = sql + " where pid = pg_backend_pid() ";
       
       /** creating PreparedStatement object to execute query*/
       PreparedStatement preStatement = conn.prepareStatement(sql);
       
       ResultSet result = preStatement.executeQuery();
       
       while(result.next())
       {
           System.out.println("   Current Date from Postgres : " +         result.getString("current_day"));
           System.out.println("   Client connected pid from Postgres : " +         result.getString("pid"));
           System.out.println("   Postgres DB Unique Name from Postgres : " +         result.getString("db_name"));
           System.out.println("   Client connected hostname from Postgres : " +         result.getString("host"));
           System.out.println("   Client connected application_name from Postgres : " +         result.getString("application_name"));
       }

       // Use .equals() for String comparison; == compares object references in Java
       if (dataTypeCheck) {
          if (sourceDatatType.equals("Int")) {
            sql = "EXPLAIN (ANALYZE, COSTS) ";
            sql = sql + "select * from datatype_test where int_val = ?";
          } else if (sourceDatatType.equals("Bigint")) {
            sql = "EXPLAIN (ANALYZE, COSTS) ";
            sql = sql + "select * from datatype_test where bigint_val = ?";
          } else if (sourceDatatType.equals("Numeric")) {
            sql = "EXPLAIN (ANALYZE, COSTS) ";
            sql = sql + "select * from datatype_test where numeric_val = ?";
          } else if (sourceDatatType.equals("Numeric122")) {
            sql = "EXPLAIN (ANALYZE, COSTS) ";
            sql = sql + "select * from datatype_test where numeric_decimal_val = ?";
          } else if (sourceDatatType.equals("Smallint")) {
            sql = "EXPLAIN (ANALYZE, COSTS) ";
            sql = sql + "select * from datatype_test where smallint_val = ?";
          }

          Statement stmt = conn.createStatement();
          stmt.execute("SET max_parallel_workers_per_gather = 0");

          System.out.println("");
          System.out.println("");
          System.out.println("===== Query Plan - Cast Int to "+ sourceDatatType +" =====");

          /** creating PreparedStatement object to execute query*/
          preStatement = conn.prepareStatement(sql);

          preStatement.setInt(1, intQueryParam);
          
          result = preStatement.executeQuery();
          
          while(result.next())
          {
              System.out.println("   " + result.getString(1));
          }

          System.out.println("");
          System.out.println("");
          System.out.println("===== Query Plan - Cast Long to "+ sourceDatatType +" =====");

          preStatement.setLong(1, longQueryParam);
          
          result = preStatement.executeQuery();
          
          while(result.next())
          {
            System.out.println("   " + result.getString(1));
          }

          System.out.println("");
          System.out.println("");
          System.out.println("===== Query Plan - Cast Float to "+ sourceDatatType +" =====");

          preStatement.setFloat(1, floatQueryParam);
          
          result = preStatement.executeQuery();
          
          while(result.next())
          {
            System.out.println("   " + result.getString(1));
          }

          System.out.println("");
          System.out.println("");
          System.out.println("===== Query Plan - Cast Double to "+ sourceDatatType +" =====");

          preStatement.setDouble(1, doubleQueryParam);
          
          result = preStatement.executeQuery();
          
          while(result.next())
          {
            System.out.println("   " + result.getString(1));
          }

          System.out.println("");
          System.out.println("");
       }

      conn.close();
      System.out.println("=========================");
      System.out.println("Command successfully executed");

    }

    catch(SQLException exp) {
	   System.out.println("Exception: " + exp.getMessage());
	   System.out.println("SQL State: " + exp.getSQLState());
	   System.out.println("Vendor Error: " + exp.getErrorCode());
    }
    
  }
}

Migration of a Single PDB to a Different Endian Platform – Is it Possible?

Seeing that I do lots of migrations throughout the course of my job, it’s always good to explore some of the newer features so that you are familiar with them and feel good about putting them into your tool bag for future use. One thing I could see being useful is the ability to migrate a single PDB from one platform to another. Upon researching the possibilities, I found that migrating and converting the endianness of a PDB really wasn’t documented. What was documented was a process to migrate a PDB from one platform to another when the endianness is the same on both platforms.

Given that, I wanted to see if the process for converting an entire container using transportable tablespaces could be applied to a single PDB. Unfortunately there is downtime required, but here is what I found:

Create Backup of Source PDB

Determine which tablespaces are impacted for a given user, check for any violations which may occur due to the transport, set those tablespaces to read-only, and then invoke RMAN connected to the PDB and take a backup:

SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
          2 Solaris[tm] OE (64-bit)                  Big                     0

SQL> exec SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'HR_TEST', incl_constraints => TRUE);

PL/SQL procedure successfully completed.

SQL> SELECT * FROM transport_set_violations;

no rows selected


export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RMAN> alter tablespace hr_test read only;

using target database control file instead of recovery catalog
Statement processed

RMAN> alter tablespace users read only;

Statement processed

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
BACKUP TO PLATFORM 'Linux x86 64-bit' AS COMPRESSED BACKUPSET FILESPERSET 1
   FORMAT '/export/home/oracle/backups/%d_xplat_bs%s_piece%p_%T_%U.bkp'
   DATAPUMP FORMAT '/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp'
   TABLESPACE HR_TEST, USERS;
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=752 instance=xttsun1 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=745 instance=xttsun1 device type=DISK

Starting backup at 29-OCT-20
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_XTTSUN_aahC":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   EXPDP> Master table "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_XTTSUN_aahC is:
   EXPDP>   /u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace HR_TEST:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
   EXPDP> Datafiles required for transportable tablespace USERS:
   EXPDP>   +DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
   EXPDP> Job "SYS"."TRANSPORT_EXP_XTTSUN_aahC" successfully completed at Thu Oct 29 13:14:42 2020 elapsed 0 00:02:17
Export completed

channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) in backup set
input datafile file number=00018 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.302.1054902701
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: starting compressed full datafile backup set
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) in backup set
input datafile file number=00017 name=+DATAC1/XTTSUN/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.435.1054891719
channel XPLATFORM_BKUP_DISK2: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK2: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK2: backup set complete, elapsed time: 00:00:03
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:35
channel XPLATFORM_BKUP_DISK1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.2.0/dbhome_1/dbs/backup_tts_XTTSUN_66977.dmp
channel XPLATFORM_BKUP_DISK1: starting piece 1 at 29-OCT-20
channel XPLATFORM_BKUP_DISK1: finished piece 1 at 29-OCT-20
piece handle=/export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp tag=TAG20201029T131156 comment=NONE
channel XPLATFORM_BKUP_DISK1: backup set complete, elapsed time: 00:00:01
Finished backup at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

Copy Backup Files and Metadata File to New Platform

Using normal methods such as scp, copy the files generated by RMAN to the new platform. One caveat is that, until 19c, the Data Pump file for the metadata is copied to $ORACLE_HOME/dbs, so you must make sure you have sufficient space there for the process to complete successfully.
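For example, something along these lines, using the backup location from above and the restore location used in the next step (the target hostname is a placeholder):

scp /export/home/oracle/backups/XTTSUN_xplat_bs*_piece1_*.bkp oracle@<target-host>:/archive/xttlnux/
scp /export/home/oracle/backups/transport_tblsp_hr_test_meta.dmp oracle@<target-host>:/archive/xttlnux/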

Restore / Convert Endian on New Platform

The last step in the process is to set up the new PDB, create the user in the PDB, and then use RMAN to transport the tablespaces. The general process consists of the following steps:

  • Create empty pluggable database and create target user in the new PDB:
SQL> select * from v$transportable_platform;
PLATFORM_ID PLATFORM_NAME                            ENDIAN_FORMAT      CON_ID
----------- ---------------------------------------- -------------- ----------
         13 Linux x86 64-bit                         Little                  0

SQL> CREATE PLUGGABLE DATABASE xttpdb ADMIN USER pdb_admin IDENTIFIED BY Welcome123 CREATE_FILE_DEST='+DATA';

Pluggable database created.

SQL> CREATE USER "HR" IDENTIFIED BY VALUES 'S:904B950378D93630E729A1A3051796718B7511BA10F5273F2F031E5CF76D;T:22D35135BC49EEAF5F4C896F26E2E8802D869C6451F39241BB4ACB76A38B7396E1C516DC78F0AFB1D3A96ABB761543673ABCF540CAC36927564F892FC148B436934AD38AD6D35C1EB8F3BB55F70BB02A'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP"
PASSWORD EXPIRE
ACCOUNT LOCK;

SQL> GRANT "RESOURCE" TO "HR";

Grant succeeded.
SQL> GRANT CREATE SESSION TO "HR";

Grant succeeded.
SQL> GRANT ALTER SESSION TO "HR";

Grant succeeded.
SQL> GRANT UNLIMITED TABLESPACE TO "HR";

Grant succeeded.
SQL> GRANT CREATE SYNONYM TO "HR";

Grant succeeded.
SQL> GRANT CREATE VIEW TO "HR";

Grant succeeded.
SQL> GRANT CREATE SEQUENCE TO "HR";

Grant succeeded.
SQL> GRANT CREATE DATABASE LINK TO "HR";

Grant succeeded.
SQL> GRANT EXECUTE ON "SYS"."DBMS_STATS" TO "HR";

Grant succeeded.
SQL> ALTER USER "HR" DEFAULT ROLE ALL;

User altered.

  • Invoke RMAN and connect to the target PDB:

export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target sys@xttpdb

RUN
{
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK1 DEVICE TYPE DISK;
ALLOCATE CHANNEL XPLATFORM_BKUP_DISK2 DEVICE TYPE DISK;
RESTORE
   ALL FOREIGN DATAFILES TO NEW
   FROM BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp'
   BACKUPSET '/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp'
   DUMP FILE FROM BACKUPSET '/archive/xttlnux/transport_tblsp_hr_test_meta.dmp';
RELEASE CHANNEL XPLATFORM_BKUP_DISK1;
RELEASE CHANNEL XPLATFORM_BKUP_DISK2;
}

using target database control file instead of recovery catalog
allocated channel: XPLATFORM_BKUP_DISK1
channel XPLATFORM_BKUP_DISK1: SID=512 instance=xttlnux2 device type=DISK

allocated channel: XPLATFORM_BKUP_DISK2
channel XPLATFORM_BKUP_DISK2: SID=615 instance=xttlnux1 device type=DISK

Starting restore at 29-OCT-20

channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK2: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK2: restoring all foreign files in backup piece
channel XPLATFORM_BKUP_DISK2: reading from backup piece /archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restoring foreign file 17 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.929.1055079821
channel XPLATFORM_BKUP_DISK2: restoring foreign file 18 to +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.928.1055079877
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs36_piece1_20201029_14ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:02
channel XPLATFORM_BKUP_DISK2: foreign piece handle=/archive/xttlnux/XTTSUN_xplat_bs35_piece1_20201029_13ve6dm4_1_1.bkp
channel XPLATFORM_BKUP_DISK2: restored backup piece 1
channel XPLATFORM_BKUP_DISK2: restore complete, elapsed time: 00:00:26
channel XPLATFORM_BKUP_DISK1: starting datafile backup set restore
channel XPLATFORM_BKUP_DISK1: specifying datafile(s) to restore from backup set
channel XPLATFORM_BKUP_DISK1: restoring Data Pump dump file to /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/backup_tts_XTTLNUX_32652.dmp
channel XPLATFORM_BKUP_DISK1: reading from backup piece /archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: foreign piece handle=/archive/xttlnux/transport_tblsp_hr_test_meta.dmp
channel XPLATFORM_BKUP_DISK1: restored backup piece 1
channel XPLATFORM_BKUP_DISK1: restore complete, elapsed time: 00:00:00

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_XTTLNUX_CFfw" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_XTTLNUX_CFfw":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/COMMENT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TRIGGER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings

ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings

   IMPDP> Job "SYS"."TSPITR_IMP_XTTLNUX_CFfw" completed with 2 error(s) at Thu Oct 29 13:44:46 2020 elapsed 0 00:00:32
Import completed

Finished restore at 29-OCT-20

released channel: XPLATFORM_BKUP_DISK1

released channel: XPLATFORM_BKUP_DISK2

RMAN> exit

At this point, the tablespaces have been converted and plugged into the target PDB and all objects are ready for use!
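One housekeeping note: tablespaces plugged in via transportable tablespaces arrive read-only (they were set read-only for the backup), so once you have validated the data you will likely want to open them for writes, along the lines of:

SQL> alter tablespace hr_test read write;
SQL> alter tablespace users read write;

If the source system stays in service, remember to set its tablespaces back to read-write as well.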

Interesting Items of Note

After performing the above conversion and migration without any issues, and as I normally do, I started to poke around to see what “really” happened. When I did, I found something quite interesting. Looking at “v$datafile”, I found that while RMAN did assign OMF filenames at restore time, it assigned a totally different directory to those files than to all of the other files in the PDB.

SQL> select file#, name from v$datafile;

     FILE# NAME
---------- ----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/hr_test.919.1055165973
	34 +DATA/XTTLNUX/B2A8DA5A8E2A777EE0540010E05D13EA/DATAFILE/users.922.1055166029

As you may have noticed, there are two UUIDs present in ASM for the same PDB: “B2A8DA5A8E2A777EE0540010E05D13EA” and “B2E8839B69346990E05302EC090A3057”. Looking at the source system, it appears that despite specifying “TO NEW” in my RMAN RUN block, the restore still carried over the UUID from the old system. There is nothing wrong with this other than the files not being in the same directory, which can become an administration headache, but if you would like to fix it, the normal methods apply, one of which is “alter database move datafile ….”. The results can be seen below:

SQL> alter database move datafile 33;

Database altered.

Elapsed: 00:00:08.40	
SQL> alter database move datafile 34;

Database altered.

Elapsed: 00:00:00.15
SQL> select file#, name from v$datafile;

     FILE# NAME
---------- -----------------------------------------------------------------------------
	29 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/system.932.1055165099
	30 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/sysaux.927.1055165099
	31 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undotbs1.931.1055165099
	32 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/undo_2.925.1055165403
	33 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/hr_test.957.1055166369
	34 +DATA/XTTLNUX/B2E8839B69346990E05302EC090A3057/DATAFILE/users.919.1055166403

6 rows selected.

As you can see, all of the files now reside within ASM under the same UUID and are still OMF. Hopefully this will help you decide whether this is an appropriate migration method to get you to a PDB on a different platform!

Series: Be a Problem Solver Not a DBA #1

From time to time I’m going to try to publish some scenarios I have been in throughout my career where being a DBA doesn’t mean being a DBA; it means being a problem solver.

In my opinion being a good problem solver requires following several basic tenets, one of which is:

“When presented with competing hypotheses to solve a problem, one should select the solution with the fewest assumptions.” – William of Ockham

Having done many migrations throughout my career, I have learned that performing database migrations is much like a “reality TV” script. Everything starts out with a plan, the plan is executed, and usually, with days to go, a big risk jeopardizes the project, all to be figured out in the end with a successful migration. A recent migration was no different, except this time it was a perfect example of how to be a problem solver, not a DBA.

The purpose of this post is not to fully explain problem-solving methods; it is more to discuss going outside your comfort zone as a DBA and looking at items you may not normally look at. In this case, I know enough about Java and Java coding to be dangerous, but I knew the other resources looking at this weren’t going to solve the problem (per the vendor, there was only one person on the planet who could solve it, and they were on a bus in Italy), so I had to take things into my own hands.

A little background:

This particular migration was an upgrade from 11.2.0.4 to 12.2.0.1 on a SuperCluster. About a week or so out, I saw a very high spike in memory utilization, to the point where the system was out of memory. When we investigated, we found that their scheduling agent was running a newer version of Java and, in turn, using 4x more heap space than the previous version of Java.

Digging deeper, I found that the process was not using either the -Xms or -Xmx flags when invoked, so what changed between Java versions to cause the increased utilization?

Since we did not own that portion of the application, the issue was transferred to the responsible party to troubleshoot. After several days of no movement, I decided to put my “problem solving” hat on.

Using the tenets of problem solving, follow a logical path:
After lots of searching, I checked the defaults Java uses for the minimum and maximum heap sizes. There was a big change between the old and the new version. For example, the old version used:

java -XX:+PrintFlagsFinal -version | grep HeapSize
    uintx ErgoHeapSizeLimit               = 0               {product}
    uintx InitialHeapSize                := 126556928       {product}
    uintx LargePageHeapSizeThreshold      = 134217728       {product}
    uintx MaxHeapSize                    := 2025848832      {product}
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)

While the new version used:

java -XX:+PrintFlagsFinal -version | grep HeapSize
    uintx ErgoHeapSizeLimit               = 0              {product}
    uintx HeapSizePerGCThread             = 87241520       {product}
    uintx InitialHeapSize                := 2147483648     {product}
    uintx LargePageHeapSizeThreshold      = 134217728      {product}
    uintx MaxHeapSize                    := 32210157568    {product}
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)


Ultimately, the solution was to add the -Xms and -Xmx flags to the program invoking the Java process so as not to rely on the environment defaults. This also avoids wasting infrastructure resources and reduces the time to start and stop the Java process by assigning only the memory that you need.
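As a sketch, the change amounted to invoking the agent with explicit heap bounds; the jar name and sizes here are placeholders, not the client’s actual values:

java -Xms512m -Xmx2g -jar scheduling-agent.jar

You can confirm what the JVM settles on using the same technique as above:

java -Xms512m -Xmx2g -XX:+PrintFlagsFinal -version | grep HeapSize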

And as part of any problem-solving exercise, focus from the bottom of the stack up, especially when multiple changes are in play. In this case, the path with the fewest assumptions surrounded the changed Java version, so that’s where I focused my effort.