Tag Archives: CloudSQL

More Obscure Things That Make It Go “Vacuum” in PostgreSQL

I previously blogged about using the “ON CONFLICT” directive to avoid creating extra work for vacuum, and later demonstrated how the MERGE statement accomplishes the same thing.

You can read the original blogs here: Reduce Vacuum by Using “ON CONFLICT” Directive, and here: Follow-Up: Reduce Vacuum by Using “ON CONFLICT” Directive.

Now, in another recent customer case, I was chasing down why an application was generating tens of thousands of foreign key and constraint violations per day, and I began to wonder whether these kinds of errors also cause the additional vacuum work described in those previous blogs. Sure enough, it depends.

Let’s set up a quick test to demonstrate:

/* Create related tables: */
CREATE TABLE public.uuid_product_value (
        id int PRIMARY KEY,
        pkid text,
        value numeric,
        product_id int,
        effective_date timestamp(3)
        );

CREATE TABLE public.uuid_product (
        product_id int PRIMARY KEY
        );

ALTER TABLE uuid_product_value
    ADD CONSTRAINT uuid_mod_test_product_id_fk 
    FOREIGN KEY (product_id) 
    REFERENCES uuid_product (product_id) ON DELETE CASCADE;

/* Insert some mocked up data */
INSERT INTO public.uuid_product VALUES ( 
        generate_series(0,200));

INSERT INTO public.uuid_product_value VALUES ( 
        generate_series(0,10000), 
        gen_random_uuid()::text,
        random()*1000,
        ROUND(random()*100),
        current_timestamp(3));
 
/* Vacuum Analyze Both tables */
VACUUM (VERBOSE, ANALYZE) uuid_product;
VACUUM (VERBOSE, ANALYZE) uuid_product_value;

/* Verify that there are no dead tuples: */
SELECT
    schemaname,
    relname,
    n_live_tup,
    n_dead_tup
FROM
    pg_stat_all_tables
WHERE
    relname in ('uuid_product_value', 'uuid_product');
 
 schemaname |      relname       | n_live_tup | n_dead_tup
------------+--------------------+------------+------------
 public     | uuid_product_value |      10001 |          0
 public     | uuid_product       |        201 |          0

Then, let’s issue a simple insert that will violate the FK and check to see if dead tuples were generated:

/* Insert a row that violates the FK, without the ON CONFLICT directive */
INSERT INTO public.uuid_product_value VALUES ( 
        generate_series(10001,10001), 
        gen_random_uuid()::text,
        random()*1000,
        202,  /* we know this product_id doesn't exist in the parent */
        current_timestamp(3));

ERROR:  insert or update on table "uuid_product_value" violates foreign key constraint "uuid_mod_test_product_id_fk"
DETAIL:  Key (product_id)=(202) is not present in table "uuid_product".
Time: 3.065 ms

And now check the tuple stats:

SELECT
    schemaname,
    relname,
    n_live_tup,
    n_dead_tup
FROM
    pg_stat_all_tables
WHERE
    relname in ('uuid_product_value', 'uuid_product');

 schemaname |      relname       | n_live_tup | n_dead_tup
------------+--------------------+------------+------------
 public     | uuid_product_value |      10001 |          1
 public     | uuid_product       |        201 |          0

Sure enough, we now have a dead row as a result of the FK violation on the insert. But will an “ON CONFLICT” directive help us in this scenario like in the others?

/* Insert a row that violates the FK, but with the ON CONFLICT directive */
INSERT INTO public.uuid_product_value VALUES ( 
        generate_series(10001,10001), 
        gen_random_uuid()::text,
        random()*1000,
        202,  /* we know this product_id doesn't exist in the parent */
        current_timestamp(3)) ON CONFLICT DO NOTHING;

/* Verify the tuple stats: */
SELECT
    schemaname,
    relname,
    n_live_tup,
    n_dead_tup
FROM
    pg_stat_all_tables
WHERE
    relname in ('uuid_product_value', 'uuid_product');

 schemaname |      relname       | n_live_tup | n_dead_tup
------------+--------------------+------------+------------
 public     | uuid_product_value |      10001 |          2
 public     | uuid_product       |        201 |          0

Unfortunately, it does not solve this problem, so we need to be cognizant of FK violations and their effect on vacuum. Now, what about trying to insert a NULL into a NOT NULL column? Will that result in a dead row? Let’s check.

/* Alter a column to NOT NULL */
ALTER TABLE public.uuid_product_value
    ALTER COLUMN pkid SET NOT NULL;

/* Check the table definition */
                        Table "public.uuid_product_value"
     Column     |              Type              | Collation | Nullable | Default
----------------+--------------------------------+-----------+----------+---------
 id             | integer                        |           | not null |
 pkid           | text                           |           | not null |
 value          | numeric                        |           |          |
 product_id     | integer                        |           |          |
 effective_date | timestamp(3) without time zone |           |          |
Indexes:
    "uuid_mod_test_pkey" PRIMARY KEY, btree (id)
    "uuid_mod_test_product_id_idx" btree (product_id) WHERE id >= 1 AND id <= 1000
    "uuid_mod_test_product_id_idx1" hash (product_id)
Foreign-key constraints:
    "uuid_mod_test_product_id_fk" FOREIGN KEY (product_id) REFERENCES uuid_product(product_id) ON DELETE CASCADE

/* Insert a row that violates the NOT NULL constraint */
INSERT INTO public.uuid_product_value VALUES ( 
        generate_series(10001,10001), 
        NULL,
        random()*1000,
        200,
        current_timestamp(3));
ERROR:  null value in column "pkid" of relation "uuid_product_value" violates not-null constraint
DETAIL:  Failing row contains (10001, null, 613.162063338205, 200, 2026-03-13 14:25:28.758).

/* Verify the tuple stats: */
SELECT
    schemaname,
    relname,
    n_live_tup,
    n_dead_tup
FROM
    pg_stat_all_tables
WHERE
    relname in ('uuid_product_value', 'uuid_product');

 schemaname |      relname       | n_live_tup | n_dead_tup
------------+--------------------+------------+------------
 public     | uuid_product_value |      10001 |          2
 public     | uuid_product       |        201 |          0

As you can see, a violation of the NOT NULL constraint does not behave the same way as a violation of the FK constraint: the dead tuple count stayed at 2, so no new dead row was created. It’s always good to know, and to relay to the application development staff, which operations will result in more work for the database, and to adjust the code accordingly. Enjoy!
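Since ON CONFLICT does not cover FK violations, one way to avoid burning dead tuples on them is to filter out orphan rows up front, at the cost of an extra lookup. A minimal sketch against the tables above:

```sql
/* Only insert rows whose product_id actually exists in the parent,
   so the FK check can never fail and no dead tuple is created. */
INSERT INTO public.uuid_product_value (id, pkid, value, product_id, effective_date)
SELECT 10001, gen_random_uuid()::text, random()*1000, 202, current_timestamp(3)
WHERE EXISTS (
    SELECT 1 FROM public.uuid_product p WHERE p.product_id = 202
);
/* product_id 202 is missing from the parent, so this inserts 0 rows
   instead of raising an FK violation. */
```

The trade-off is an extra index probe per statement, which is usually far cheaper than the dead tuple plus the vacuum work it eventually triggers.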

Do PostgreSQL Sub-Transactions Hurt Performance?

The short answer is always “maybe”. In the following post, I hope to demonstrate what creates a sub-transaction and what happens to overall transaction ID utilization when they are invoked. I will also show how performance is affected when many connections are creating and consuming sub-transactions.

First, it is important to understand which statements utilize a transaction ID and which ones are more costly than others:

  • Calling Nested Procedures: 🟢 Free. No new XIDs are used. They share the parent’s transaction.
  • BEGIN…END (No Exception Block): 🟢 Free. Just organization.
  • COMMIT: 🟡 Expensive. Burns a main Transaction ID (finite resource, leads to Vacuum Freeze).
  • EXCEPTION: 🔴 Dangerous. Creates a subtransaction (performance killer).

So why is misplaced EXCEPTION logic a potential performance killer? PostgreSQL is optimized to handle a small number of open subtransactions very efficiently. Each backend process has a fixed-size array in shared memory (part of the PGPROC structure) that can hold up to 64 open subtransaction IDs (XIDs) for the current top-level transaction. As long as your nesting depth stays below 64, PostgreSQL manages everything in this fast, local memory array and does not need to touch the subtransaction SLRU (simple least-recently-used) subsystem, which is what pg_stat_slru tracks. The problem is that subtransaction XID utilization can get out of hand rather quickly if you are not paying attention to your application flow. Once PostgreSQL runs out of fast in-memory slots (the PGPROC array) and spills tracking data to the slower SLRU cache (pg_subtrans), performance degrades non-linearly (often 50x–100x slower) and causes global locking contention that can freeze other users.
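You can watch for this spill by querying the pg_stat_slru view. A sketch (note the row name varies by version: “Subtrans” through PostgreSQL 16, “subtransaction” in 17+):

```sql
/* Check SLRU activity for the subtransaction cache.
   Non-zero blks_read suggests lookups have spilled past the
   64-slot per-backend cache and are hitting pg_subtrans on disk. */
SELECT name, blks_hit, blks_read, blks_written
FROM pg_stat_slru
WHERE name IN ('Subtrans', 'subtransaction');
```

Keep in mind that with enough RAM these counters can stay at 0 even while subtransaction management is consuming CPU, so treat this as one signal among several.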

Another byproduct of utilizing too many sub-transactions is the additional WAL that is generated. All of this can be demonstrated with a simple block of code:

------------------------------------------------------------------
-- 1. PREP: Create the helper function
------------------------------------------------------------------
CREATE OR REPLACE FUNCTION generate_subtrans_load(p_id int, depth int) 
RETURNS void AS $$
BEGIN
    IF depth > 0 THEN
        BEGIN
            PERFORM generate_subtrans_load(p_id, depth - 1);
        EXCEPTION WHEN OTHERS THEN NULL;
        END;
    ELSE
        INSERT INTO penalty_test VALUES (p_id, 'Deep Stack');
    END IF;
END;
$$ LANGUAGE plpgsql;

------------------------------------------------------------------
-- 2. THE COMPLETE TEST SUITE
------------------------------------------------------------------
DO $$ 
DECLARE
    -- Timing Variables
    v_start_ts  timestamp;
    v_t_base    interval; v_t_single interval; 
    v_t_48      interval; v_t_128    interval;

    -- Row Verification
    v_rows_base int;      v_rows_single int;
    v_rows_48   int;      v_rows_128    int;

    -- WAL Variables
    v_start_lsn pg_lsn;   v_end_lsn  pg_lsn;
    v_wal_base  numeric;  v_wal_single numeric;
    v_wal_48    numeric;  v_wal_128    numeric;

    -- XID Variables
    v_start_xid xid8;     v_end_xid  xid8;
    v_xid_base  bigint;   v_xid_single bigint;
    v_xid_48    bigint;   v_xid_128    bigint;

    i int;
    v_target_rows int := 5000; 
    
    -- Helper calc vars
    v_us_48 numeric;
    v_us_128 numeric;
BEGIN
    ---------------------------------------------------
    -- A. SETUP
    ---------------------------------------------------
    DROP TABLE IF EXISTS penalty_test;
    -- Explicitly preserve rows so COMMIT doesn't empty the table
    CREATE TEMP TABLE penalty_test (id int, val text) ON COMMIT PRESERVE ROWS;

    ---------------------------------------------------
    -- B. TEST 1: BASELINE (Standard Loop)
    ---------------------------------------------------
    -- 1. Force fresh start
    COMMIT; 
    v_start_ts  := clock_timestamp();
    v_start_lsn := pg_current_wal_insert_lsn();
    v_start_xid := pg_snapshot_xmax(pg_current_snapshot());

    FOR i IN 1..v_target_rows LOOP
        INSERT INTO penalty_test VALUES (i, 'Standard');
    END LOOP;
    
    -- 2. Force snapshot refresh to see XID consumption
    COMMIT; 
    v_end_lsn := pg_current_wal_insert_lsn();
    v_end_xid := pg_snapshot_xmax(pg_current_snapshot());
    v_t_base  := clock_timestamp() - v_start_ts;
    
    SELECT count(*) INTO v_rows_base FROM penalty_test;
    v_wal_base := v_end_lsn - v_start_lsn;
    v_xid_base := (v_end_xid::text::bigint - v_start_xid::text::bigint);

    ---------------------------------------------------
    -- C. TEST 2: SINGLE TRAP (Depth 1)
    ---------------------------------------------------
    TRUNCATE TABLE penalty_test;
    COMMIT; -- Clear stats
    
    v_start_ts  := clock_timestamp();
    v_start_lsn := pg_current_wal_insert_lsn();
    v_start_xid := pg_snapshot_xmax(pg_current_snapshot());

    FOR i IN 1..v_target_rows LOOP
        BEGIN
            INSERT INTO penalty_test VALUES (i, 'Single Trap');
        EXCEPTION WHEN OTHERS THEN NULL;
        END;
    END LOOP;
    
    COMMIT; -- Force refresh
    v_end_lsn := pg_current_wal_insert_lsn();
    v_end_xid := pg_snapshot_xmax(pg_current_snapshot());
    v_t_single := clock_timestamp() - v_start_ts;

    SELECT count(*) INTO v_rows_single FROM penalty_test;
    v_wal_single := v_end_lsn - v_start_lsn;
    v_xid_single := (v_end_xid::text::bigint - v_start_xid::text::bigint);

    ---------------------------------------------------
    -- D. TEST 3: SAFE ZONE (Depth 48)
    ---------------------------------------------------
    TRUNCATE TABLE penalty_test;
    COMMIT;
    
    v_start_ts  := clock_timestamp();
    v_start_lsn := pg_current_wal_insert_lsn();
    v_start_xid := pg_snapshot_xmax(pg_current_snapshot());

    FOR i IN 1..v_target_rows LOOP
        PERFORM generate_subtrans_load(i, 48);
    END LOOP;
    
    COMMIT;
    v_end_lsn := pg_current_wal_insert_lsn();
    v_end_xid := pg_snapshot_xmax(pg_current_snapshot());
    v_t_48    := clock_timestamp() - v_start_ts;

    SELECT count(*) INTO v_rows_48 FROM penalty_test;
    v_wal_48 := v_end_lsn - v_start_lsn;
    v_xid_48 := (v_end_xid::text::bigint - v_start_xid::text::bigint);

    ---------------------------------------------------
    -- E. TEST 4: OVERFLOW ZONE (Depth 128)
    ---------------------------------------------------
    TRUNCATE TABLE penalty_test;
    COMMIT;
    
    v_start_ts  := clock_timestamp();
    v_start_lsn := pg_current_wal_insert_lsn();
    v_start_xid := pg_snapshot_xmax(pg_current_snapshot());

    FOR i IN 1..v_target_rows LOOP
        PERFORM generate_subtrans_load(i, 128);
    END LOOP;
    
    COMMIT;
    v_end_lsn := pg_current_wal_insert_lsn();
    v_end_xid := pg_snapshot_xmax(pg_current_snapshot());
    v_t_128   := clock_timestamp() - v_start_ts;

    SELECT count(*) INTO v_rows_128 FROM penalty_test;
    v_wal_128 := v_end_lsn - v_start_lsn;
    v_xid_128 := (v_end_xid::text::bigint - v_start_xid::text::bigint);

    ---------------------------------------------------
    -- F. THE REPORT
    ---------------------------------------------------
    RAISE NOTICE '===================================================';
    RAISE NOTICE '    POSTGRESQL SUBTRANSACTION IMPACT REPORT      ';
    RAISE NOTICE '===================================================';
    RAISE NOTICE 'Target Rows: %', v_target_rows;
    RAISE NOTICE '---------------------------------------------------';
    
    RAISE NOTICE 'METRIC 1: EXECUTION TIME & VERIFICATION';
    RAISE NOTICE '  1. Baseline:         % (Rows: %)', v_t_base, v_rows_base;
    RAISE NOTICE '  2. Single Exception: % (Rows: %)', v_t_single, v_rows_single;
    RAISE NOTICE '  3. Safe Zone (48):   % (Rows: %)', v_t_48, v_rows_48;
    RAISE NOTICE '  4. Overflow (128):   % (Rows: %)', v_t_128, v_rows_128;
    RAISE NOTICE '---------------------------------------------------';

    v_us_48  := extract(epoch from v_t_48) * 1000000;
    v_us_128 := extract(epoch from v_t_128) * 1000000;

    RAISE NOTICE 'METRIC 2: AVERAGE COST PER SUBTRANSACTION';
    RAISE NOTICE '   - Safe Zone (48):     % us per subtrans', round(v_us_48 / (v_target_rows * 48), 2);
    RAISE NOTICE '   - Overflow Zone (128): % us per subtrans', round(v_us_128 / (v_target_rows * 128), 2);
    
    IF (v_us_128 / (v_target_rows * 128)) > (v_us_48 / (v_target_rows * 48)) THEN
        RAISE NOTICE '   -> RESULT: Overflow subtransactions were % %% slower per unit.', 
            round( ( ((v_us_128 / (v_target_rows * 128)) - (v_us_48 / (v_target_rows * 48))) / (v_us_48 / (v_target_rows * 48)) * 100)::numeric, 1);
    ELSE
         RAISE NOTICE '   -> RESULT: Overhead appears linear.';
    END IF;
    RAISE NOTICE '---------------------------------------------------';
    
    RAISE NOTICE 'METRIC 3: DISK USAGE (WAL WRITTEN)';
    RAISE NOTICE '  1. Baseline:         % bytes', v_wal_base;
    RAISE NOTICE '  2. Single Exception: % bytes', v_wal_single;
    RAISE NOTICE '  3. Safe Zone (48):   % bytes', v_wal_48;
    RAISE NOTICE '  4. Overflow (128):   % bytes', v_wal_128;
    RAISE NOTICE '---------------------------------------------------';

    RAISE NOTICE 'METRIC 4: TRANSACTION ID CONSUMPTION';
    RAISE NOTICE '  1. Baseline:         % XIDs', v_xid_base;
    RAISE NOTICE '  2. Single Exception: % XIDs', v_xid_single;
    RAISE NOTICE '  3. Safe Zone (48):   % XIDs', v_xid_48;
    RAISE NOTICE '  4. Overflow (128):   % XIDs', v_xid_128;
    
    RAISE NOTICE '===================================================';
END $$;

The code block above shows four different scenarios:

  • Baseline: Simple insert that inserts a number of rows in a loop with no exception logic
  • Single Exception (Depth 1): The same insert loop but with an exception block that fires after every insert
  • 48 Sub-transaction Loop (Depth 48): A function call that arbitrarily creates 48 subtransactions (well below the 64 limit) and then completes the same insert
  • 128 Sub-transaction Loop (Depth 128): A function call that arbitrarily creates 128 subtransactions (well above the 64 limit) and then completes the same insert

What you will see is that when run by a single user, the impact is not terribly great. Beyond utilizing more transaction IDs (XIDs) and generating much more WAL, the system and performance impact does not appear to be a big deal. Here are the results for a 5,000 row insert:

NOTICE:  ===================================================
NOTICE:      POSTGRESQL SUBTRANSACTION IMPACT REPORT
NOTICE:  ===================================================
NOTICE:  Target Rows: 5000
NOTICE:  ---------------------------------------------------
NOTICE:  METRIC 1: EXECUTION TIME & VERIFICATION
NOTICE:    1. Baseline:         00:00:00.020292 (Rows: 5000)
NOTICE:    2. Single Exception: 00:00:00.033419 (Rows: 5000)
NOTICE:    3. Safe Zone (48):   00:00:01.414132 (Rows: 5000)
NOTICE:    4. Overflow (128):   00:00:03.817267 (Rows: 5000)
NOTICE:  ---------------------------------------------------
NOTICE:  METRIC 2: AVERAGE COST PER SUBTRANSACTION
NOTICE:     - Safe Zone (48):     5.89 us per subtrans
NOTICE:     - Overflow Zone (128): 5.96 us per subtrans
NOTICE:     -> RESULT: Overflow subtransactions were 1.2 % slower per unit.
NOTICE:  ---------------------------------------------------
NOTICE:  METRIC 3: DISK USAGE (WAL WRITTEN)
NOTICE:    1. Baseline:         48 bytes
NOTICE:    2. Single Exception: 43936 bytes
NOTICE:    3. Safe Zone (48):   2076216 bytes
NOTICE:    4. Overflow (128):   5536832 bytes
NOTICE:  ---------------------------------------------------
NOTICE:  METRIC 4: TRANSACTION ID CONSUMPTION
NOTICE:    1. Baseline:         1 XIDs
NOTICE:    2. Single Exception: 5001 XIDs
NOTICE:    3. Safe Zone (48):   240001 XIDs
NOTICE:    4. Overflow (128):   640002 XIDs
NOTICE:  ===================================================
DO
Time: 5294.689 ms (00:05.295)

As you can see in the results above, each scenario burns more XIDs and creates more WAL due to the additional transaction tracking that must be available should the WAL need to be replayed. The real killer comes in when multiple connections all begin to execute this same code. When I ran a synthetic test using Apache JMeter, the results were astounding. The test was completed on a 4 vCPU C4A AlloyDB instance:

 Test                     | Avg Time Per Exec | Min Exec Time | Max Exec Time
--------------------------+-------------------+---------------+---------------
 1 User x 50 iterations   | 2137 ms           | 2069 ms       | 3477 ms
 4 Users x 50 iterations  | 3171 ms           | 2376 ms       | 4080 ms
 10 Users x 50 iterations | 9041 ms           | 4018 ms       | 47880 ms

As more users are added, execution time increases due to the time needed to manage the XID space and write the additional WAL; some of the increase also comes from elongated connection times caused by the XID-space management. When the same test was executed with a Managed Connection Pooler, times came down a little bit:

 Test                     | Avg Time Per Exec | Min Exec Time | Max Exec Time
--------------------------+-------------------+---------------+---------------
 10 Users x 50 iterations | 6399 ms           | 2690 ms       | 10624 ms

Now you ask: how can I monitor for this performance impact? The verdict is that it depends. Wide variation in execution time depending on database load may be one indication. Another may be entries in the pg_stat_slru view (although, depending on available RAM, metrics may not appear there), and a final indication will always be WAL usage. In summary:

 Metric         | What it tells you     | What it hides
----------------+-----------------------+-----------------------------------------------------------------
 Execution Time | “My query is slow.”   | It doesn’t explain why (CPU vs lock vs I/O); additional
                |                       | investigation may be required.
 pg_stat_slru   | “Disk is thrashing.”  | It reads 0 if you have enough RAM, hiding the fact that your CPU
                |                       | is burning up managing memory locks.
 WAL Volume     | The real truth.       | Very little: it proves you are writing massive amounts of
                |                       | metadata (savepoint markers) to disk, even if the data volume
                |                       | is small.
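To track the WAL-volume signal, you can sample pg_stat_wal (available in PostgreSQL 14+) or bracket a workload by diffing WAL insert LSNs, the same technique the test block above uses. A sketch (the LSN literals are examples only):

```sql
/* Cluster-wide WAL counters (PostgreSQL 14+): */
SELECT wal_records, wal_fpi, wal_bytes FROM pg_stat_wal;

/* Or bracket a specific workload: capture the insert LSN before
   and after, then compute the bytes written between the two. */
SELECT pg_current_wal_insert_lsn();   -- note value before the workload
/* ... run the workload ... */
SELECT pg_current_wal_insert_lsn();   -- note value after, then:
SELECT pg_wal_lsn_diff('0/30000A8'::pg_lsn, '0/3000060'::pg_lsn) AS wal_bytes;
```

A sudden jump in WAL bytes without a corresponding jump in row counts is a strong hint that savepoint/subtransaction metadata is the culprit.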

When considering the three scenarios:

  • Tier 1: Standard Loop (Baseline)
    • Mechanism: One single transaction for the whole batch.
    • Overhead: Near zero.
    • Verdict: 🟢 Safe. This is how Postgres is designed to work.
  • Tier 2: The “Safety Trap” (Exception Block)
    • Mechanism: Uses BEGIN…EXCEPTION inside the loop.
    • Hidden Cost: Every single iteration creates a Subtransaction. This burns a Subtransaction ID and forces a WAL write to create a “savepoint” it can roll back to.
    • Verdict: 🟡 Risky. It is 3x–5x slower and generates massive Write-Ahead Log (WAL) bloat, even for successful inserts.
  • Tier 3: The “Overflow” Disaster (Depth > 64)
    • Mechanism: Nesting subtransactions deeper than 64 layers (or having >64 active savepoints).
    • The Cliff: PostgreSQL runs out of fast RAM slots (PGPROC array) and must spill tracking data to the slow SLRU cache (pg_subtrans).
    • Verdict: 🔴 Catastrophic. Performance degrades non-linearly (often 50x–100x slower) and causes global locking contention that can freeze other users.

Final Recommendation:
If you need to handle errors in a bulk load (e.g., “Insert 10,000 rows, skip the ones that fail”):

  • DO validate data before the insert to filter out bad rows in the application layer or a staging table
  • Do NOT wrap every insert in an EXCEPTION block (i.e., inside a LOOP)
  • Use EXCEPTION logic purposefully and avoid catch-all handlers like WHEN OTHERS
  • DO use INSERT … ON CONFLICT DO NOTHING (if the error is unique constraint)
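As a concrete example of these recommendations, the per-row EXCEPTION loop from the test suite above can be collapsed into one set-based statement. A sketch using the penalty_test table from the test block (assuming id carries a unique or primary key constraint, which the test table as written does not):

```sql
/* One statement, one XID, zero subtransactions: duplicate keys are
   skipped by ON CONFLICT instead of trapped by EXCEPTION blocks. */
INSERT INTO penalty_test (id, val)
SELECT g, 'Set Based'
FROM generate_series(1, 5000) AS g
ON CONFLICT DO NOTHING;
```

Compare this with the “Single Trap” scenario: same 5,000 rows, but no savepoint WAL and no per-row XID burn.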

Would you like to read more and get additional perspective?
https://postgres.ai/blog/20210831-postgresql-subtransactions-considered-harmful

Hope this helps you understand the effect of subtransactions in PostgreSQL!

Understanding High Water Mark Locking Issues in PostgreSQL Vacuums

I recently had a customer that wanted to leverage read replicas to ensure that their read queries would not be impeded by work being done on the primary instance; they also required an SLA of, at worst, a few seconds. Ultimately they weren’t meeting the SLA, and my colleagues and I were asked to look at what was going on.

The first thing we came to understand is that the pattern of work on the primary is a somewhat frequent large DELETE statement followed by a data refresh accomplished by a COPY from STDIN command against a partitioned table with 16 hash partitions.

The problem being observed was that periodically the SELECTs occurring on the read replica would time out and not meet the SLA. Upon investigation, we found that the “startup” process on the read replica would periodically request an exclusive lock on some seemingly random partition. This exclusive lock would block the SELECTs (which are partition unaware) and cause the timeout. But what was causing the lock?

After spending some time investigating, the team was able to correlate the exclusive lock with a routine autovacuum occurring on the primary. But why was it locking? After inspecting the WAL, it turned out the issue was due to a step in the vacuum process whereby it tries to return free pages at the end of the table back to the OS: truncation of the High Water Mark (HWM). Essentially, the lock is requested on the primary and then transmitted to the replica via the WAL so that the tables can be kept consistent.

To confirm that it was in fact the step in VACUUM that truncates the HWM, we decided to alter each partition of the table to allow VACUUM to skip that step:

ALTER TABLE [table name / partition name] SET (vacuum_truncate = false);

After letting this run for 24 hours, we saw no further blocking locks causing SLA misses on the replicas. Should we worry about no longer shrinking the High Water Mark (HWM)? Well, as with everything in IT, it depends. Other DBMS engines like Oracle do not shrink the HWM automatically; maintenance operations such as DBMS_REDEFINITION or ALTER TABLE … SHRINK SPACE / SHRINK SPACE COMPACT deal with that. So, in PostgreSQL, do we need to worry about it? This is where the pg_freespacemap extension can help. We can use this extension and a script to check whether the HWM is in fact growing or staying put. If it is growing, we can simply execute a regular VACUUM with the additional TRUNCATE option to handle it:

VACUUM (verbose, truncate true) [schema].[table name];

When you do this, you will see one additional message in the VACUUM output signifying that the VACUUM truncated the High Water Mark (HWM):

INFO:  table "large_table": truncated 302534 to 302233 pages
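If you disable truncation on some partitions as we did, it is worth being able to see which relations currently carry the option. A sketch that inspects reloptions (a text[] of “option=value” strings) in pg_class:

```sql
/* List relations whose storage options mention vacuum_truncate,
   e.g. partitions altered with SET (vacuum_truncate = false). */
SELECT c.relname, c.reloptions
FROM pg_class c
WHERE c.reloptions::text ILIKE '%vacuum_truncate%';
```

This makes it easy to audit the setting later, for instance before a scheduled manual VACUUM (TRUNCATE true) pass.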

As I stated earlier, we can use pg_freespacemap to see if we actually need to worry about the High Water Mark (HWM) growing. I could have taken a lot of time to write a script to figure it out, but instead, I enlisted Google Gemini to see what it would come up with. After a few iterations, the output was nearly perfect!

CREATE EXTENSION pg_freespacemap;

CREATE OR REPLACE FUNCTION show_empty_pages(p_table_name TEXT)
RETURNS VOID AS $$
DECLARE
    -- Core processing variables
    table_oid_regclass  REGCLASS;
    block_size          BIGINT;
    fsm_granularity     BIGINT;
    max_fsm_free_space  BIGINT;
    total_pages         BIGINT;
    high_water_mark     BIGINT := 0;

    -- Variables for the final summary
    first_empty_block   BIGINT;
    free_pages_at_end   BIGINT;
    free_space_at_end   TEXT;
BEGIN
    -- Setup
    table_oid_regclass := p_table_name::regclass;
    block_size  := current_setting('block_size')::bigint;
    SELECT relpages INTO total_pages FROM pg_class WHERE oid = table_oid_regclass;
    fsm_granularity    := block_size / 256;
    max_fsm_free_space := floor((block_size - 24) / fsm_granularity) * fsm_granularity;

    --------------------------------------------------------------------------------
    -- PASS 1: FIND THE HIGH-WATER MARK (last page with data)
    --------------------------------------------------------------------------------
    FOR i IN REVERSE (total_pages - 1)..0 LOOP
        IF pg_freespace(table_oid_regclass, i) < max_fsm_free_space THEN
            high_water_mark := i;
            EXIT;
        END IF;
    END LOOP;

    --------------------------------------------------------------------------------
    -- FINAL STEP: CALCULATE AND RAISE THE SUMMARY NOTICE
    --------------------------------------------------------------------------------
    first_empty_block := high_water_mark + 1;
    free_pages_at_end := total_pages - first_empty_block;
    IF free_pages_at_end < 0 THEN
        free_pages_at_end := 0;
    END IF;
    free_space_at_end := pg_size_pretty(free_pages_at_end * block_size);

    RAISE NOTICE '-------------------------------------------------------------';
    RAISE NOTICE 'Summary for table: %', p_table_name;
    RAISE NOTICE '-------------------------------------------------------------';
    RAISE NOTICE 'The High Water Mark (HWM) is at page: %', total_pages;
    IF total_pages <> first_empty_block THEN
    	RAISE NOTICE 'First potentially empty page is at: %', first_empty_block;
    	RAISE NOTICE 'Total Pages in Table: %', total_pages;
    	RAISE NOTICE 'Number of potentially truncatable pages at the end: %', free_pages_at_end;
    	RAISE NOTICE 'Amount of free space at the end of the table: %', free_space_at_end;
    ELSE
    	RAISE NOTICE 'There are no empty pages to truncate';
    END IF;
    RAISE NOTICE '-------------------------------------------------------------';
END;
$$ LANGUAGE plpgsql;

This handy script could be periodically executed to check the High Water Mark (HWM) and will produce the following output:

(postgres@10.3.1.17:5432) [postgres] > SELECT * FROM show_empty_pages('public.large_table');
NOTICE:  -------------------------------------------------------------
NOTICE:  Summary for table: public.large_table
NOTICE:  -------------------------------------------------------------
NOTICE:  The High Water Mark (HWM) is at page: 302534
NOTICE:  First potentially empty page is at: 302233
NOTICE:  Total Pages in Table: 302534
NOTICE:  Number of potentially truncatable pages at the end: 301
NOTICE:  Amount of free space at the end of the table: 2408 kB
NOTICE:  -------------------------------------------------------------

If there is no free space after the last full block, the output will look like this:

NOTICE:  -------------------------------------------------------------
NOTICE:  Summary for table: public.large_table
NOTICE:  -------------------------------------------------------------
NOTICE:  The High Water Mark (HWM) is at page: 302233
NOTICE:  There are no empty pages to truncate
NOTICE:  -------------------------------------------------------------

So while there is no single right answer on how to deal with this, make sure you know the implications of each step in the process. In this case, we decided to set the “vacuum_truncate” option to false, but another option might be to tune vacuum in some other way, such as making it more or less frequent. Always evaluate your own situation, but in any case it’s always good to know what happens in your database when certain commands are executed.

Enjoy!

Follow-Up: Reduce Vacuum by Using “ON CONFLICT” Directive

I previously blogged about ensuring that the “ON CONFLICT” directive is used in order to avoid vacuum from having to do additional work. You can read the original blog here: Reduce Vacuum by Using “ON CONFLICT” Directive

Now that the MERGE functionality has been incorporated into Postgres 15 and above, I wanted to ensure that there was no “strange” behavior as it relates to vacuum when using MERGE. As you can see here, MERGE performs exactly as expected: when a merge matches an existing row and performs an update instead of an insert, exactly one row is marked dead (the old row version replaced by the update).

/* Create the table: */
CREATE TABLE public.pk_violation_test (
        id int PRIMARY KEY, 
        value numeric,
        product_id int,
        effective_date timestamp(3)
        );
 
 
/* Insert some mocked up data */
INSERT INTO public.pk_violation_test VALUES ( 
        generate_series(0,10000), 
        random()*1000,
        random()*100,
        current_timestamp(3));
 
/* Verify that there are no dead tuples: */
SELECT
    schemaname,
    relname,
    n_live_tup,
    n_dead_tup
FROM
    pg_stat_all_tables
WHERE
    relname = 'pk_violation_test';
 
 schemaname |      relname      | n_live_tup | n_dead_tup
------------+-------------------+------------+------------
 public     | pk_violation_test |      10001 |          0

Then, create a simple merge and check the results:

WITH insert_query AS (
    SELECT
        0 AS id,
        44.33893489873 AS value,
        46 AS product_id,
        now() AS effective_date) MERGE INTO pk_violation_test pkt
    USING insert_query i ON pkt.id = i.id
    WHEN MATCHED THEN
        UPDATE SET
            value = i.value, product_id = i.product_id, effective_date = i.effective_date
    WHEN NOT MATCHED THEN
        INSERT (id, value, product_id, effective_date)
            VALUES (i.id, i.value, i.product_id, i.effective_date);
MERGE 1

And then check the dead tuple count:

SELECT
    schemaname,
    relname,
    n_live_tup,
    n_dead_tup
FROM
    pg_stat_all_tables
WHERE
    relname = 'pk_violation_test';
 schemaname |      relname      | n_live_tup | n_dead_tup
------------+-------------------+------------+------------
 public     | pk_violation_test |      10001 |          1
(1 row)

As expected, only one row is marked dead. MERGE is great functionality and I am glad to see it in Postgres. As time permits, all of your “ON CONFLICT” statements should be converted to use it. Enjoy!
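For comparison, the pre-MERGE upsert for the same row would use the “ON CONFLICT” directive. Assuming the pk_violation_test table above, this sketch takes the same UPDATE path and likewise leaves one dead tuple behind:

```sql
/* Upsert the same row using ON CONFLICT instead of MERGE */
INSERT INTO pk_violation_test (id, value, product_id, effective_date)
VALUES (0, 44.33893489873, 46, now())
ON CONFLICT (id) DO UPDATE
    SET value          = EXCLUDED.value,
        product_id     = EXCLUDED.product_id,
        effective_date = EXCLUDED.effective_date;
```

Re-running the dead-tuple query after this statement should show n_dead_tup incremented by one, just as with the MERGE example.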

Use of “pgtt” Extension in Self Managed vs. Cloud Products

There are several benefits to using temp tables in SQL DML operations. Given my long history of working with Oracle databases, I am highly familiar with Oracle-style “Global Temporary Tables” and have used them to solve many performance issues when complex joins are involved. That’s why I was excited when Gilles Darold released the extension “pgtt” for PostgreSQL, and even more excited when Google made the extension available in the Google CloudSQL for Postgres and AlloyDB for Postgres products.

In my opinion, native Postgres temporary tables leave a lot to be desired. For example, each table has to be created upon use; nothing is persistent. This takes time and leads to Postgres catalog bloat (I am looking to do a post on this soon). That being said, in self-managed instances, ‘pgtt’ is very easy to use. You can load the extension either at connection time by user, or for anyone that connects to a database. For example (make sure the extension is created in the database before setting these):

alter role test_user SET session_preload_libraries to 'pgtt';

alter database test_pgtt SET session_preload_libraries = 'pgtt';

This will, by default, use the ‘pgtt’ version of temporary tables instead of the default temporary tables Postgres offers. You can then create ‘pgtt’ style tables and refer to them via the pgtt_schema with no further modifications.

To revert to the “old” style temporary tables, you must alter the user / database as shown below and then disconnect the session. To my knowledge, if the library is enabled this way there is no way to use both styles of temporary tables at the same time.

alter role test_user RESET session_preload_libraries;

alter database pgtt_test RESET session_preload_libraries;

The Set Up…..

/* set flag "pgtt.enabled" on alloydb / cloudsql postgres console  to on */

/* create the extension in every database you need to use global temp tables */
CREATE EXTENSION pgtt;
LOAD 'pgtt';

/* Additional extensions needed for this test */
CREATE EXTENSION "uuid-ossp";

/* create the persistent global temporary table */
CREATE /*GLOBAL*/ TEMPORARY TABLE test_gtt_table (
id integer,
lbl text) ON COMMIT DELETE ROWS;

/* create an index on the global temp table */
CREATE INDEX ON pgtt_schema.test_gtt_table (id);

Use of ‘pgtt’ in Cloud Database Products

When you look at managed Cloud databases (from any vendor), you have to understand that some items need to be locked down in order to safeguard the “managed” nature of the product. Unfortunately (as of this writing), at least in Google Cloud Postgres products, you cannot manipulate the parameter “session_preload_libraries”. So then you ask, “How can I programmatically use the extension?” At the time of this writing, I have found only two ways. Either the user has to explicitly execute “LOAD ‘pgtt'” at the beginning of (or at some point in) their session, or the “LOAD” must be embedded in a function / procedure similar to the following:

/* function definition with loading of extension as a work around */
create or replace function public.test_gtt_function(P_IN_IDS integer[])
	returns table (
		id integer,
		lbl text
	)
    language plpgsql 
AS $func$
BEGIN
LOAD 'pgtt';
	/* revised query below */
	INSERT INTO pgtt_schema.test_gtt_table
		select generate_series,substr(uuid_generate_v4()::text,1,15) FROM generate_series(1, 5000);
	RETURN QUERY
	SELECT mygtt.id,mygtt.lbl from pgtt_schema.test_gtt_table mygtt where mygtt.id = ANY (P_IN_IDS);
END
$func$;

As you can see in the above function, the “LOAD” directive is the first thing in the function. While I am not a fan of having to do it this way, there doesn’t seem to be a huge performance impact, but it is one you must nonetheless account for:

(postgres@##########:5432) [tpcc] > LOAD 'pgtt';
LOAD
Time: 3.811 ms
(postgres@##########:5432) [tpcc] > \watch 1
LOAD
Time: 2.922 ms
LOAD
Time: 1.219 ms
LOAD
Time: 1.335 ms
LOAD
Time: 1.300 ms
LOAD
Time: 1.282 ms
LOAD

One reason I do like the implementation within the function is that it is easier to remove the directive there than when it is embedded throughout application code. Either way, the right direction is to get the vendor to allow targeted values for this parameter. This would allow the most flexibility and the least amount of awareness and hassle when using this extension.

So What Happens If You Don’t Load The Library??

So if you do not load the library with one of the options above, the function still works. But what happens? The table actually behaves as a “real” table and the data is never truncated… even after disconnection:

create or replace function public.test_gtt_wo_load_function(P_IN_IDS integer[])
	returns table (
		id integer,
		lbl text
	)
    language plpgsql 
AS $func$
BEGIN
	/* revised query below */
	INSERT INTO pgtt_schema.test_gtt_table
		select generate_series,substr(uuid_generate_v4()::text,1,15) FROM generate_series(1, 5000);
	RETURN QUERY
	SELECT mygtt.id,mygtt.lbl from pgtt_schema.test_gtt_table mygtt where mygtt.id = ANY (P_IN_IDS);
END
$func$;

(postgres@#######:5432) [tpcc] > \set ids ARRAY[3,28,69,235,389,512,678,899]
(postgres@#######:5432) [tpcc] > select * from public.test_gtt_wo_load_function(:ids);
 id  |       lbl
-----+-----------------
   3 | 2f82b04f-db13-4
  28 | 49dfea9a-5c41-4
  69 | d2d27343-a8d4-4
 235 | d5f32cd9-9495-4
 389 | 69842fc5-f0e5-4
 512 | e52c1625-0fab-4
 678 | 0fd320a9-8f08-4
 899 | 452a5f95-6e16-4
(8 rows)

(postgres@#######:5432) [tpcc] > select * from public.test_gtt_wo_load_function(:ids);
 id  |       lbl
-----+-----------------
   3 | 2f82b04f-db13-4
   3 | 632e097d-0f80-4
  28 | 49dfea9a-5c41-4
  28 | bf9ed2c4-14d2-4
  69 | d2d27343-a8d4-4
........
 899 | 452a5f95-6e16-4
 899 | d40c915f-04c7-4
(16 rows)

(postgres@#######:5432) [tpcc] > select * from public.test_gtt_wo_load_function(:ids);
 id  |       lbl
-----+-----------------
   3 | 2f82b04f-db13-4
   3 | 632e097d-0f80-4
   3 | 9cd65fda-ab4a-4
  28 | 49dfea9a-5c41-4
  28 | bf9ed2c4-14d2-4
  28 | 6cdd9a22-5c60-4
.........
 899 | 452a5f95-6e16-4
 899 | d40c915f-04c7-4
 899 | 0319d855-a935-4
(24 rows)

(postgres@#######:5432) [tpcc] > \q
shaneborden_google_com@replication-instance-1:/home/shaneborden_google_com $ psql -h ###### -p 5432 -U postgres -d tpcc
Password for user postgres:
(postgres@#######:5432) [tpcc] > \set ids ARRAY[3,28,69,235,389,512,678,899]
(postgres@#######:5432) [tpcc] > select * from public.test_gtt_wo_load_function(:ids);
 id  |       lbl
-----+-----------------
   3 | 82f66c20-f484-4
   3 | 01ad1b4d-2790-4
   3 | a3c38d05-fb7d-4
   3 | 7a4b2ff8-3e32-4
   3 | af8a96e1-194c-4
   3 | 02586796-1cc6-4
  28 | 62b1fb31-d3c0-4
  28 | 3acfe744-574f-4
  28 | 1e2d64fa-1106-4
  28 | d92a9321-4330-4
  28 | f3b1d45e-d4a7-4
  28 | 270146eb-1098-4
  69 | 506ad97c-fdad-4
  69 | bf3340d5-17a2-4
  69 | 47cbd9eb-83da-4
  69 | 487e8b22-4a05-4
  69 | b17314bc-5822-4
.......
 678 | a8d7dc45-c027-4
 678 | 17b666e5-1fc1-4
 678 | 4386eeb4-7b4e-4
 678 | 2942f25f-3d13-4
 678 | 3853bac0-e0ba-4
 678 | 95499bef-4440-4
 899 | 4503351c-4fe4-4
 899 | 900337bf-a658-4
 899 | 5f1f02f8-29a3-4
 899 | d635c921-8f9b-4
 899 | 0dd42f22-0f8e-4
 899 | 16bb8dcf-4e35-4
(48 rows)

Pretty crazy. So if for some reason the LOAD does not occur or fails, the temp table can end up retaining stale and duplicate rows.

So what happens if the “pgtt” temp table was never created? You get a failure as you expect:

(postgres@#######:5432) [tpcc] > select * from public.test_gtt_wo_load_function(:ids);
ERROR:  relation "pgtt_schema.test_gtt_table" does not exist
LINE 1: INSERT INTO pgtt_schema.test_gtt_table
                    ^
QUERY:  INSERT INTO pgtt_schema.test_gtt_table
select generate_series,substr(uuid_generate_v4()::text,1,15) FROM generate_series(1, 5000)
CONTEXT:  PL/pgSQL function test_gtt_wo_load_function(integer[]) line 4 at SQL statement
Time: 5.204 ms

So it is important to ensure that if you intend to use “pgtt” you have a programmatic method to load the extension….. and in the meantime, I will be working on my end to get a cleaner way to load it in the managed products I have access to.

Using the “hint_plan” Table Provided by the PostgreSQL Extension “pg_hint_plan”

Introduction

For those who have worked with Oracle, the pg_hint_plan extension is one that will allow you to hint plans in patterns that you are likely very familiar with:

  • sql_patch
  • sql_profile
  • sql_plan_baselines

While the functionality provided by pg_hint_plan is currently not nearly as robust (see the hints list), it does provide most of what you would encounter day to day as a DBA. That being said, one thing that is currently missing is the ability to easily add hints without changing code via stored procedures / functions as in Oracle. Currently, the only way to do this in open source PostgreSQL is to manually manipulate a table named “hints”, typically located in the “hint_plan” schema.

The “hints” table which is provided by the extension is highly dependent (just like Oracle) on a normalized SQL statement. A normalized SQL statement in PostgreSQL is one that has all carriage returns removed, all spaces converted to single spaces and all literals and parameters replaced with a “?”. Typically you have to do this manually, but in this blog post, I am going to show how I have leveraged entries in “pg_stat_statements” along with custom written functions to normalize the statement and place it into the “hints” table. To use this “hints” table feature, the following setting must be enabled at either the session or system level:

set session pg_hint_plan.enable_hint_table to on;
or

in the postgresql.conf:
pg_hint_plan.enable_hint_table = on

What Does a Normalized Statement Look Like?

Typically, when you receive code from a developer, or even code that you work on yourself, you format it in order to make it human readable and easier to interpret. For example, you might want your statement to look like this (notice the parameters / literals in the statement):

SELECT
    b.bid,
    sum(abalance)
FROM
    pgbench_branches b
    JOIN pgbench_accounts a ON (b.bid = a.bid)
WHERE
    b.bid = 12345
    AND a.aid BETWEEN 100 AND 200
GROUP BY
    b.bid
ORDER BY
    1;

Now to normalize the statement for use with the “hints” table it needs to look like this:

select b.bid, sum(abalance) from pgbench_branches b join pgbench_accounts a on (b.bid = a.bid) where b.bid = ? and a.aid between ? and ? group by b.bid order by 1;

You can either manually manipulate the statement to get it into this format, or attempt to do it programmatically. I prefer as much as possible to let the system format it for me, so I have written a few helper scripts to do this:
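If you have a statement in hand (for example, one copied from a log rather than pulled from pg_stat_statements), the same regexp_replace chain used by the helper functions below can be sketched in Python. The function name here is my own and not part of any extension:

```python
import re


def normalize_query(query: str) -> str:
    """Normalize a SQL statement for the hint_plan.hints table:
    bind markers ($1, $2, ...) become '?', carriage returns / tabs /
    newlines become spaces, runs of whitespace collapse to a single
    space, and a trailing semicolon is appended."""
    normalized = re.sub(r"\$\d+", "?", query)            # parameters -> ?
    normalized = re.sub(r"[\r\t\n]", " ", normalized)    # line breaks -> spaces
    normalized = re.sub(r"\s+", " ", normalized).strip() # collapse whitespace
    return normalized + ";"
```

Feeding it the formatted pgbench query from the previous section yields the same single-line normalized form that the SQL helper functions produce.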

Helper Queries:

**** Feel free to utilize these functions, however they may contain errors or may not normalize all statements. They depend on the pg_stat_statements table and if the entire statement will not fit within the query field of that table, then these functions will not produce the correct output. I will also place them on my public github. If you find any errors or omissions, please let me know. ****

hint_plan.display_candidate_pg_hint_plan_queries

While you can easily select from the “hints” table on your own, this query will show what a normalized statement will look like before loading it to the table. You can leave the “p_query_id” parameter null to return all queries present in the pg_stat_statements in a normalized form or you can populate it with a valid “query_id” and it will return a single normalized statement:

CREATE OR REPLACE FUNCTION hint_plan.display_candidate_pg_hint_plan_queries(
  p_query_id bigint default null
  )
  RETURNS TABLE(queryid bigint, norm_query_string text)
  LANGUAGE 'plpgsql'
  COST 100
  VOLATILE PARALLEL UNSAFE
AS $BODY$
 DECLARE 
 	pg_stat_statements_exists boolean := false;
 BEGIN
   SELECT EXISTS (
    SELECT FROM 
        information_schema.tables 
    WHERE 
        table_schema LIKE 'public' AND 
        table_type LIKE 'VIEW' AND
        table_name = 'pg_stat_statements'
    ) INTO pg_stat_statements_exists;
   IF pg_stat_statements_exists AND p_query_id is not null THEN
    RETURN QUERY
    SELECT pss.queryid,
           substr(regexp_replace(
             regexp_replace(
                regexp_replace(
                   regexp_replace(
                      regexp_replace(pss.query, '\$\d+', '?', 'g'),
                                E'\r', ' ', 'g'),
                              E'\t', ' ', 'g'),
                           E'\n', ' ', 'g'),
                         '\s+', ' ', 'g') || ';',1,100)
 	FROM pg_stat_statements pss where pss.queryid = p_query_id;
   ELSE
    RETURN QUERY
    SELECT pss.queryid,
           substr(regexp_replace(
             regexp_replace(
                regexp_replace(
                   regexp_replace(
                      regexp_replace(pss.query, '\$\d+', '?', 'g'),
                                E'\r', ' ', 'g'),
                              E'\t', ' ', 'g'),
                           E'\n', ' ', 'g'),
                         '\s+', ' ', 'g') || ';',1,100)
 	FROM pg_stat_statements pss;
   END IF;
 END; 
$BODY$;

If our candidate query was this:

select queryid, query from pg_stat_statements where queryid =  -8949523101378282526;
       queryid        |            query
----------------------+-----------------------------
 -8949523101378282526 | select b.bid, sum(abalance)+
                      | from pgbench_branches b    +
                      | join pgbench_accounts a    +
                      | on (b.bid = a.bid)         +
                      | where b.bid = $1           +
                      | group by b.bid             +
                      | order by 1
(1 row)

The display function would return the following normalized query:

SELECT hint_plan.display_candidate_pg_hint_plan_queries(p_query_id => -8949523101378282526);
-[ RECORD 1 ]--------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------
display_candidate_pg_hint_plan_queries | (-8949523101378282526,"select b.bid, sum(abalance) from pgbench_branches b join pgbench_accounts a on (b.bid = a.bid) where b.bid = ? group by b.bid order by 1;")

You can then verify that the query is normalized properly and then move on toward using the next function to add the normalized query to the “hints” table.

hint_plan.add_stored_pg_hint_plan

Using the same query in the previous section, we will now add it to the “hints” table. This is where it is important to understand what hint you want to add.

CREATE OR REPLACE FUNCTION hint_plan.add_stored_pg_hint_plan(
  p_query_id bigint,
  p_hint_text text,
  p_application_name text default ''
  )
  RETURNS varchar
  LANGUAGE 'plpgsql'
  COST 100
  VOLATILE PARALLEL UNSAFE
AS $BODY$
-- p_hint_text can contain one or more hints either separated by a space or
-- a carriage return character.  Examples include:
-- Space Separated: SeqScan(a) Parallel(a 0 hard)
-- ASCII CRLF Separated: SeqScan(a)'||chr(10)||'Parallel(a 0 hard)
-- Single Hint: SeqScan(a)
-- 
-- Escaped text does not work: /* E'SeqScan(a)\nParallel(a 0 hard)'
 DECLARE 
 	hint_id hint_plan.hints.id%TYPE;
 	normalized_query_text hint_plan.hints.norm_query_string%TYPE;
 	pg_stat_statements_exists boolean := false;
 BEGIN
   SELECT EXISTS (
    SELECT FROM 
        information_schema.tables 
    WHERE 
        table_schema LIKE 'public' AND 
        table_type LIKE 'VIEW' AND
        table_name = 'pg_stat_statements'
    ) INTO pg_stat_statements_exists;
   IF NOT pg_stat_statements_exists THEN
    RAISE NOTICE 'pg_stat_statements extension has not been loaded, exiting';
    RETURN 'error';
   ELSE
    SELECT regexp_replace(
             regexp_replace(
                regexp_replace(
                   regexp_replace(
                      regexp_replace(query, '\$\d+', '?', 'g'),
                                E'\r', ' ', 'g'),
                              E'\t', ' ', 'g'),
                           E'\n', ' ', 'g'),
                         '\s+', ' ', 'g') || ';'
 	 INTO normalized_query_text
 	 FROM pg_stat_statements where queryid = p_query_id;
     IF normalized_query_text IS NOT NULL THEN
		INSERT INTO hint_plan.hints(norm_query_string, application_name, hints)
    	VALUES (normalized_query_text,
    			p_application_name,
    			p_hint_text
    	)
    	RETURNING id INTO hint_id;
 	    RETURN cast(hint_id as text);
     ELSE
 		RAISE NOTICE 'Query ID % does not exist in pg_stat_statements', p_query_id;
 		RETURN 'error';
     END IF;
   END IF;
 END; 
$BODY$;

Hint text can contain one or more hints, either separated by a space or a carriage return character. Examples include:

  • Space Separated: SeqScan(a) Parallel(a 0 hard)
  • ASCII CRLF Separated: SeqScan(a)’||chr(10)||’Parallel(a 0 hard)
  • Single Hint: SeqScan(a)
  • Escaped text does not work in the context of this function although this can be used if you are inserting manually to the “hints” table: E’SeqScan(a)\nParallel(a 0 hard)’

SELECT hint_plan.add_stored_pg_hint_plan(p_query_id => -8949523101378282526,
						p_hint_text => 'SeqScan(a) Parallel(a 0 hard)',
						p_application_name => '');

-[ RECORD 1 ]-----------+---
add_stored_pg_hint_plan | 28

Time: 40.889 ms

select * from hint_plan.hints where id = 28;
-[ RECORD 1 ]-----+------------------------------------------------------------------------------------------------------------------------------------------
id                | 28
norm_query_string | select b.bid, sum(abalance) from pgbench_branches b join pgbench_accounts a on (b.bid = a.bid) where b.bid = ? group by b.bid order by 1;
application_name  |
hints             | SeqScan(a) Parallel(a 0 hard)

In the above example, we are forcing a serial sequential scan of the “pgbench_accounts”. We left the “application name” parameter empty so that the hint applies to any calling application.

hint_plan.delete_stored_pg_hint_plan

You could easily just issue a delete against the “hints” table, but in keeping with utilizing a “function” approach to utilizing this functionality, a delete helper has also been developed:

CREATE OR REPLACE FUNCTION hint_plan.delete_stored_pg_hint_plan(
  p_hint_id bigint
  )
  RETURNS TABLE(id integer, norm_query_string text, application_name text, hints text)
  LANGUAGE 'plpgsql'
  COST 100
  VOLATILE PARALLEL UNSAFE
AS $BODY$
 BEGIN
    RETURN QUERY
    DELETE FROM hint_plan.hints h WHERE h.id = p_hint_id RETURNING *;
 END; 
$BODY$;

To delete a plan you can call the procedure as follows:

 SELECT hint_plan.delete_stored_pg_hint_plan(p_hint_id => 28);
-[ RECORD 1 ]--------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
delete_stored_pg_hint_plan | (28,"select b.bid, sum(abalance) from pgbench_branches b join pgbench_accounts a on (b.bid = a.bid) where b.bid = ? group by b.bid order by 1;","","SeqScan(a) Parallel(a 0 hard)")

Time: 33.685 ms
select * from hint_plan.hints where id = 28;
(0 rows)

Time: 24.868 ms

As you can see the “hints” table is very useful and can help you emulate many parts of SQL Plan Management just like in Oracle.

Enjoy and all feedback is welcomed!!!

Leverage Google Cloud Logging + Monitoring for Custom Cloud SQL for Postgres or AlloyDB Alerts

As migrations to CloudSQL and AlloyDB pick up speed, inevitably you will run into a condition where the cloud tooling has not quite caught up with exposing custom alerts and incidents that you may be exposing on-premises with tools such as Nagios or Oracle Enterprise Manager. One such example is monitoring of replication tools such as the GoldenGate heartbeat table. While there are many ways you might implement this, I wanted to demonstrate a way to leverage Google Cloud Logging + Google Cloud Monitoring. Using this method allows us to keep a long-term log of certain values, such as lag, or anything else you have built into the heartbeat mechanism. To demonstrate, let’s use Python to query the database and create a Cloud Logging entry:

import argparse
from datetime import datetime, timedelta
from sqlalchemy import create_engine, text
from google.cloud import logging


def retrievePgAlert(
    username: str,
    password: str,
    hostname: str,
    portNumber: int,
    databaseName: str,
    alertType: str,
) -> None:

    alertList: list = []

    conn_string = f"postgresql+psycopg2://{username}:{password}@{hostname}:{portNumber}/{databaseName}?client_encoding=utf8"
    engine = create_engine(conn_string)
    with engine.connect() as con:

        if alertType == "ogg-lag":
            # bind parameters cannot appear inside a quoted interval literal,
            # so multiply the bound value by a one-minute interval instead
            sqlQuery = text(
                "select replicat, effective_date, lag from ogg.heartbeat "
                "where lag >= :lagAmt and effective_date >= now() - (:intervalAmt * interval '1 minute')"
            )

        result = con.execute(
            sqlQuery, {"lagAmt": oggLagAmt, "intervalAmt": checkIntervalMinutes}
        ).fetchall()
        for row in result:
            alertList.append(row)

        if not alertList:
            print(f"No alerts as of {datetime.now().strftime('%m/%d/%Y %H:%M:%S')}")
        else:
            for alertText in alertList:
                print(
                    f"Replicat: {alertText[0]} at date {alertText[1]} has a total lag of: {alertText[2]} seconds"
                )

            writeGcpCloudLoggingAlert(
                logger_alert_type=alertType,
                loggerName=args.loggerName,
                logger_message=alertList,
            )

    # the "with" block closes the connection; just dispose of the engine
    engine.dispose()


def writeGcpCloudLoggingAlert(
    logger_alert_type: str,
    loggerName: str,
    logger_message: list,
) -> None:

    # Writes log entries to the given logger.
    logging_client = logging.Client()

    # This log can be found in the Cloud Logging console under 'Custom Logs'.
    logger = logging_client.logger(loggerName)

    # Struct log. The struct can be any JSON-serializable dictionary.
    if logger_alert_type == "ogg-lag":
        for alertFields in logger_message:
            logger.log_struct(
                {
                    "alertType": logger_alert_type,
                    "replicat": str(alertFields[0]),
                    "alertDate": alertFields[1].strftime("%m/%d/%Y, %H:%M:%S"),
                    "alertRetrievalDate": datetime.now().strftime("%m/%d/%Y, %H:%M:%S"),
                    "lagInSeconds": int(alertFields[2]),
                },
                severity="ERROR",
            )

    print("Wrote logs to {}.".format(logger.name))


def delete_logger(loggerName):
    """Deletes a logger and all its entries.

    Note that a deletion can take several minutes to take effect.
    """
    logging_client = logging.Client()
    logger = logging_client.logger(loggerName)

    logger.delete()

    print("Deleted all logging entries for {}".format(logger.name))


if __name__ == "__main__":

    cloudSQLHost: str = "127.0.0.1"
    hostname: str
    portNumber: str
    database: str
    username: str
    password: str
    oggLagAmt: int = 15
    checkIntervalMinutes: int = 20

    import os  # "~" is not expanded by open() itself

    with open(os.path.expanduser("~/.pgpass"), "r") as pgpassfile:
        for line in pgpassfile:
            if line.strip().split(":")[0] == cloudSQLHost:
                hostname, portNumber, database, username, password = line.strip().split(
                    ":"
                )

    parser = argparse.ArgumentParser(
        description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument(
        "-loggerName",
        "--loggerName",
        type=str,
        help="GCP Cloud Log Namespace",
        default="postgres-alert",
    )
    parser.add_argument(
        "-alertType",
        "--alertType",
        type=str,
        help="Type of alert to log",
        default="ogg-lag",
    )
    args = parser.parse_args()

    if args.alertType == "ogg-lag":
        retrievePgAlert(
            hostname=hostname,
            username=username,
            password=password,
            portNumber=portNumber,
            databaseName=database,
            alertType=args.alertType,
        )

In this script we utilize the Google Cloud Logging APIs, SQLAlchemy and some other basic python imports to query the database based on a lag amount we are looking for from the heartbeat table.

***Note: The query within the python code could check for any condition by changing the query, by leveraging “gcloud” commands or REST API calls.

If the condition is met, the script creates a JSON message which is then written to the appropriate Google Cloud Logging Namespace. An example of the JSON message is below (sensitive information like the project id and instance id have been redacted):

{
  "insertId": "1b6fb35g18b606n",
  "jsonPayload": {
    "alertRetrievalDate": "01/20/2023, 18:47:20",
    "lagInSeconds": 15,
    "alertType": "ogg-lag",
    "alertDate": "01/20/2023, 18:34:55",
    "replicat": "r_hr"
  },
  "resource": {
    "type": "gce_instance",
    "labels": {
      "project_id": "[project id]",
      "instance_id": "****************",
      "zone": "projects/[project id]/zones/us-central1-c"
    }
  },
  "timestamp": "2023-01-20T18:47:20.103058301Z",
  "severity": "ERROR",
  "logName": "projects/[project id]/logs/postgres-alert",
  "receiveTimestamp": "2023-01-20T18:47:20.103058301Z"
}

Create a Cloud Logging Alert

Now that we have published a message to Cloud Logging, what can we do with it? Generally there are two paths: a Cloud Metric or a Cloud Alert. For this demonstration, we will use the Cloud Alert. To start the setup, navigate to the console page “Operations Logging” —> “Logs Explorer” and click the “Create alert” function. The following dialog will show. You will need to double check the query that retrieves the appropriate logs in step 2, and in step 3 you can choose the time between notifications (this mutes alerts that happen within the interval) and how long past the last alert an incident will stay open. In this case, we will mute duplicate alerts for 5 minutes after the first alert (if an alert occurs at 6 minutes, another notification will fire), and incidents will remain open for 30 minutes past the last alert (no new incidents will be logged unless an alert occurs after that time frame). The query to be used within the alert is as follows:


logName="projects/[project id]/logs/postgres-alert"
AND severity="ERROR"
AND (jsonPayload.alertType = "ogg-lag")
AND (jsonPayload.lagInSeconds >= 15)
AND resource.labels.instance_id = [instance id]

The following dialogues outline the screens used to setup the alert.

The last step will be to choose your notification method, which is managed by different notification channels. The different types of notification channels include:

  • Mobile Devices
  • PagerDuty Services
  • PagerDuty Sync
  • Slack
  • Webhooks
  • E-Mail
  • SMS
  • Pub/Sub

Once all of this is defined, your alert is set to notify once you place the python script on an appropriate schedule such as linux cron, Google Cloud Scheduler, etc. In this case we will now wait for an issue to occur that matches the alert. When it does, an email like the following will be delivered to the notification channel:

As your migration to cloud continues, keep an open mind and look for alternative ways to handle all of the operational “things” you are accustomed to in your on-premises environment. Most of the time there is a way in cloud to handle it!

Tuning the PostgreSQL “random_page_cost” Parameter

In my previous post “Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!“, I introduced three PostgreSQL parameters that I feel are the most “neglected” and present the most opportunity for performance tuning in a PostgreSQL instance. I’ve previously posted parts 1 and 2 which cover “work_mem” and “effective_io_concurrency“, so in the final part of this series, I would like to demonstrate tuning the “random_page_cost” parameter.

Because PostgreSQL can be installed on many different types of systems, the default for this parameter represents a system that is likely the least performant: one with low CPU and a disk subsystem that is less than ideal. This setting can be overridden at the individual object level as well; however, that may represent a management nightmare, so I would recommend against it. A good explanation of the parameter exists here, and for most CloudSQL instances it should likely be set lower than the default, because random page reads are expected to be less expensive on the types of I/O subsystems present within today’s cloud environments.
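Once testing confirms that a lower value helps, it can be persisted more broadly than at the session level. The statements below are a sketch (the database name is illustrative); on managed products such as CloudSQL or AlloyDB, set the corresponding database flag from the console instead of using ALTER SYSTEM:

```sql
/* self-managed: persist for the entire instance, then reload the config */
ALTER SYSTEM SET random_page_cost = 2;
SELECT pg_reload_conf();

/* or scope the change to a single database */
ALTER DATABASE tpcc SET random_page_cost = 2;
```

The database-level setting takes effect for new sessions connecting to that database, which makes it easy to trial the change on one workload before applying it instance-wide.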

For those of you that come from Oracle backgrounds, this parameter is very much like the “OPTIMIZER_INDEX_COST_ADJ” parameter that we used to manipulate in older Oracle versions. To refresh your mind on this parameter you can see the 19c explanation here.

As a simple example of how the query plan can change for a simple SQL statement, I will first show the query plan with the default setting of 4. While it is using an index, the access path could be better:

set max_parallel_workers_per_gather = 0;
set session random_page_cost to 4;

explain (analyze, verbose, costs, settings, buffers, wal, timing, summary, format text)
SELECT c.c_name, c.c_acctbal, sum(o.o_totalprice)
FROM orders o
JOIN customer c ON (c.c_custkey = o.o_custkey)
WHERE c.c_custkey = 30003 and o.o_orderstatus = 'O'
GROUP BY c.c_name, c.c_acctbal;

                                                                             QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=76.28..76.37 rows=1 width=57) (actual time=0.034..0.035 rows=0 loops=1)
   Output: c.c_name, c.c_acctbal, sum(o.o_totalprice)
   Group Key: c.c_name, c.c_acctbal
   Buffers: shared hit=7
   ->  Sort  (cost=76.28..76.30 rows=8 width=33) (actual time=0.033..0.034 rows=0 loops=1)
         Output: c.c_name, c.c_acctbal, o.o_totalprice
         Sort Key: c.c_name, c.c_acctbal
         Sort Method: quicksort  Memory: 25kB
         Buffers: shared hit=7
         ->  Nested Loop  (cost=4.97..76.16 rows=8 width=33) (actual time=0.027..0.028 rows=0 loops=1)
               Output: c.c_name, c.c_acctbal, o.o_totalprice
               Buffers: shared hit=7
               ->  Index Scan using customer_pk on public.customer c  (cost=0.42..8.44 rows=1 width=31) (actual time=0.014..0.015 rows=1 loops=1)
                     Output: c.c_custkey, c.c_mktsegment, c.c_nationkey, c.c_name, c.c_address, c.c_phone, c.c_acctbal, c.c_comment
                     Index Cond: (c.c_custkey = '30003'::numeric)
                     Buffers: shared hit=4
               ->  Bitmap Heap Scan on public.orders o  (cost=4.55..67.64 rows=8 width=14) (actual time=0.009..0.009 rows=0 loops=1)
                     Output: o.o_orderdate, o.o_orderkey, o.o_custkey, o.o_orderpriority, o.o_shippriority, o.o_clerk, o.o_orderstatus, o.o_totalprice, o.o_comment
                     Recheck Cond: (o.o_custkey = '30003'::numeric)
                     Filter: (o.o_orderstatus = 'O'::bpchar)
                     Buffers: shared hit=3
                     ->  Bitmap Index Scan on order_customer_fkidx  (cost=0.00..4.55 rows=16 width=0) (actual time=0.008..0.008 rows=0 loops=1)
                           Index Cond: (o.o_custkey = '30003'::numeric)
                           Buffers: shared hit=3
 Settings: effective_cache_size = '3053008kB', effective_io_concurrency = '10', max_parallel_workers_per_gather = '0', work_mem = '512MB'
 Query Identifier: 7272380376793434809
 Planning:
   Buffers: shared hit=2
 Planning Time: 0.234 ms
 Execution Time: 0.076 ms

Now, with random_page_cost changed to a setting of 2, we get a different access path:

set max_parallel_workers_per_gather = 0;
set session random_page_cost to 2;

explain (analyze, verbose, costs, settings, buffers, wal, timing, summary, format text)
SELECT c.c_name, c.c_acctbal, sum(o.o_totalprice)
FROM orders o
JOIN customer c ON (c.c_custkey = o.o_custkey)
WHERE c.c_custkey = 30003 and o.o_orderstatus = 'O'
GROUP BY c.c_name, c.c_acctbal;

                                                                             QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=39.38..39.48 rows=1 width=57) (actual time=0.027..0.028 rows=0 loops=1)
   Output: c.c_name, c.c_acctbal, sum(o.o_totalprice)
   Group Key: c.c_name, c.c_acctbal
   Buffers: shared hit=7
   ->  Sort  (cost=39.38..39.40 rows=8 width=33) (actual time=0.026..0.027 rows=0 loops=1)
         Output: c.c_name, c.c_acctbal, o.o_totalprice
         Sort Key: c.c_name, c.c_acctbal
         Sort Method: quicksort  Memory: 25kB
         Buffers: shared hit=7
         ->  Nested Loop  (cost=0.85..39.26 rows=8 width=33) (actual time=0.021..0.022 rows=0 loops=1)
               Output: c.c_name, c.c_acctbal, o.o_totalprice
               Buffers: shared hit=7
               ->  Index Scan using customer_pk on public.customer c  (cost=0.42..4.44 rows=1 width=31) (actual time=0.012..0.012 rows=1 loops=1)
                     Output: c.c_custkey, c.c_mktsegment, c.c_nationkey, c.c_name, c.c_address, c.c_phone, c.c_acctbal, c.c_comment
                     Index Cond: (c.c_custkey = '30003'::numeric)
                     Buffers: shared hit=4
               ->  Index Scan using order_customer_fkidx on public.orders o  (cost=0.43..34.75 rows=8 width=14) (actual time=0.008..0.008 rows=0 loops=1)
                     Output: o.o_orderdate, o.o_orderkey, o.o_custkey, o.o_orderpriority, o.o_shippriority, o.o_clerk, o.o_orderstatus, o.o_totalprice, o.o_comment
                     Index Cond: (o.o_custkey = '30003'::numeric)
                     Filter: (o.o_orderstatus = 'O'::bpchar)
                     Buffers: shared hit=3
 Settings: effective_cache_size = '3053008kB', effective_io_concurrency = '10', max_parallel_workers_per_gather = '0', random_page_cost = '2', work_mem = '512MB'
 Query Identifier: 7272380376793434809
 Planning:
   Buffers: shared hit=2
 Planning Time: 0.199 ms
 Execution Time: 0.064 ms

Now, for this simple example, the execution time isn’t vastly different because an index is used in both cases. However, in cases where the parameter adjustment allows an index scan to be chosen over a sequential scan, you will really see the benefit.

Ultimately, there are some other parameters that may benefit from adjustment, such as the “cpu_*” parameters; however, those require much more testing and experimentation than “random_page_cost”, especially if your system runs on SSDs, as most Google CloudSQL for Postgres instances do, or on Google AlloyDB, where the I/O subsystem is built specifically for the implementation. If you use either of these platforms, I would strongly consider lowering this parameter from the default of 4 to at least 2, or perhaps even 1.1, depending on the shape you have chosen and the I/O limits served by each shape.
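As a sketch of how such a change might be rolled out (the database name below is a placeholder, and on CloudSQL or AlloyDB the instance-wide value is set through database flags rather than ALTER SYSTEM):

```sql
-- Try it per session first to compare plans:
SET random_page_cost = 1.1;

-- Persist for all new sessions on one database once validated
-- ("mydb" is a placeholder name):
ALTER DATABASE mydb SET random_page_cost = 1.1;
```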

Enjoy!

Tuning the PostgreSQL “work_mem” Parameter

In my previous post “Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!”, I introduced three PostgreSQL parameters that I feel are the most “neglected” and present the most opportunity for performance tuning in a PostgreSQL instance. In the first of what will become a three-part series, I would like to demonstrate tuning the “work_mem” parameter.

The “work_mem” parameter optimizes database operations such as:

  • sorts
  • bitmap heap scans
  • hash joins
  • materialized common table expressions (WITH statements)
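Before tuning, it helps to find the queries that are actually spilling. One low-effort approach (a sketch; the threshold value is illustrative) is to log temporary file creation:

```sql
-- Threshold is in kB: 0 logs every temp file, -1 (the default) disables logging.
SET log_temp_files = 0;
-- Spilling queries then show up in the log with lines similar to:
-- LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 24813568
```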

To get started, let’s create a test table with 100M rows of data:

CREATE TABLE public.work_mem_test (
		id int PRIMARY KEY, 
		value numeric,
		product_id int,
		effective_date timestamp(3)
		);
INSERT INTO public.work_mem_test VALUES ( 
		generate_series(0,100000000), 
		random()*1000,
		random()*100,
		current_timestamp(3));

CREATE INDEX prod_value_idx ON public.work_mem_test (product_id);
VACUUM ANALYZE public.work_mem_test;

We will then run EXPLAIN ANALYZE with the “COSTS, BUFFERS, VERBOSE” options so that we can fully see what is going on with the query. For demonstration purposes, I have set “work_mem” to the lowest possible setting of 64kB. In addition, so that we don’t get the variability of parallel processing, I have set “max_parallel_workers_per_gather” to zero to disable parallelism. Most systems will likely see even better gains than this test case, which ran on a very small 2 vCPU / 8GB Google CloudSQL for PostgreSQL instance:

set session work_mem to '64kB';
set max_parallel_workers_per_gather = 0;
set effective_io_concurrency = 20;
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2640424.97..2641959.63 rows=613866 width=27) (actual time=16593.228..16778.132 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24248kB
   Buffers: shared hit=6 read=363702, temp read=15340 written=16486
   I/O Timings: read=4120.104
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..2539439.23 rows=613866 width=27) (actual time=82.454..15413.822 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
         Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
         Rows Removed by Index Recheck: 48506004
         Heap Blocks: exact=2058 lossy=360975
         Buffers: shared hit=6 read=363702
         I/O Timings: read=4120.104
         ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=80.340..80.342 rows=0 loops=1)
               Buffers: shared hit=6 read=669
               I/O Timings: read=17.683
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=12.680..12.680 rows=90001 loops=1)
                     Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                     Buffers: shared hit=3 read=247
                     I/O Timings: read=7.831
               ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=67.657..67.657 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=3 read=422
                     I/O Timings: read=9.852
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=39 read=2
   I/O Timings: read=2.588
 Planning Time: 3.136 ms
 Execution Time: 16811.970 ms
(30 rows)

Time: 16903.784 ms (00:16.904)

EXPLAIN, via the ANALYZE and BUFFERS options, gives us the following data points:

Rows Removed by Index Recheck: 48506004
Heap Blocks: exact=2058 lossy=360975
Execution Time: 16811.970 ms

This essentially means that 64kB of work_mem can hold 2058 exact block references in the bitmap structure. Everything that falls outside of that bitmap becomes lossy blocks, which don’t point to an exact tuple but rather to a block containing many tuples. The recheck condition then scans each such block for the tuples the query is looking for.

The following formula is a starting point, but may or may not give you the exact setting needed based on various factors. Since we used the lowest possible work_mem, the setting becomes a multiple of that:

new_mem_in_mbytes = 
 ((exact heap blocks + lossy heap blocks) / exact heap blocks) * work_mem_in_bytes / 1048576
= ceil(round(((2058 + 360975) / 2058) * 65536 / 1048576,1))
= 11MB
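The same arithmetic can be checked directly in psql, plugging in the exact and lossy heap block counts from the plan above:

```sql
SELECT ceil(round(((2058 + 360975) / 2058.0) * 65536 / 1048576, 1)) AS suggested_work_mem_mb;
-- suggested_work_mem_mb = 11
```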

Note: In most cases, I have found that this formula works well on the first pass; however, as you will see in the subsequent tests, the estimated work_mem setting wasn’t quite the actual amount needed, likely due to a mis-estimate by the planner.

Reducing the Lossy Block Access

So for the next test, I will increase “work_mem” to 11MB and re-execute the query.

set session work_mem to '11MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2160340.77..2161875.43 rows=613866 width=27) (actual time=11382.002..11574.572 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24232kB
   Buffers: shared hit=23329 read=340379, temp read=3029 written=3034
   I/O Timings: read=3618.302
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..2090832.53 rows=613866 width=27) (actual time=185.251..10923.764 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
         Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
         Heap Blocks: exact=164090 lossy=198943
         ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=132.954..132.957 rows=0 loops=1)
               Buffers: shared hit=675
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=4.449..4.450 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=128.503..128.503 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=425
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.179 ms
 Execution Time: 11611.071 ms
(26 rows)

Time: 11695.952 ms (00:11.696)

With 11MB, we got more exact heap blocks, but still not enough memory to make the bitmap fully exact. Applying the formula to the numbers from this execution plan…

new_mem_in_mbytes = 
 ((exact heap blocks + lossy heap blocks) / exact heap blocks) * work_mem_in_bytes / 1048576
= ceil(round(((164090 + 198943) / 164090) * 12582912 / 1048576,1));
= 24MB


Let’s increase just a little bit more to 24MB as the latest iteration of the formula has suggested.

set session work_mem to '24MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=1709385.33..1710919.99 rows=613866 width=27) (actual time=3651.589..3791.250 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24232kB
   Buffers: shared hit=23329 read=340379, temp read=3029 written=3031
   I/O Timings: read=1493.162
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..1639877.09 rows=613866 width=27) (actual time=348.261..3201.421 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
         Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
         ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=188.309..188.311 rows=0 loops=1)
               Buffers: shared hit=675
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=5.090..5.091 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=183.215..183.215 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=425
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.242 ms
 Execution Time: 3828.215 ms
(25 rows)

Time: 3912.670 ms (00:03.913)

No more lossy block scans! Time has also reduced quite significantly from the first execution.

Handling the Sort Method

Now, we need to pay attention to these explain plan lines:

 "Bitmap Heap Scan on public.work_mem_test….rows=613866 width=27" 
 "Sort Method: external merge Disk: 24232kB."

Some of the sort is in memory and some is spilled to disk. So, to fit the entire rowset in memory, we multiply the input rows by the width (613866 rows × 27 bytes ≈ 16MB). In addition, the planner spilled another 24MB to disk, so let’s add that to “work_mem” as well.

16MB (rowset) + 24MB (spilled to disk) = 40MB more "work_mem"

So with the current “work_mem” of 24MB plus the additional computed to remove the sort (rounded up), the total needed is 64MB.
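The estimate can be sanity-checked with pg_size_pretty, using the row count, width, and spill size reported in the plan above:

```sql
SELECT pg_size_pretty(613866::bigint * 27)  AS rowset_estimate,  -- rows * width, roughly 16MB
       pg_size_pretty(24232::bigint * 1024) AS spilled_to_disk;  -- external merge, roughly 24MB
```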

Let’s run one more test:

set session work_mem to '64MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=613087.41..614622.07 rows=613866 width=27) (actual time=3186.918..3309.896 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: quicksort  Memory: 61169kB
   Buffers: shared hit=6 read=363702
   I/O Timings: read=1306.892
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..554071.67 rows=613866 width=27) (actual time=245.348..2908.344 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
         Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
         ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=115.051..115.053 rows=0 loops=1)
               Buffers: shared hit=6 read=669
               I/O Timings: read=3.561
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=6.160..6.161 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=108.889..108.889 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=3 read=422
                     I/O Timings: read=2.231
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.180 ms
 Execution Time: 3347.271 ms
(28 rows)

Time: 3431.188 ms (00:03.431)

With that adjustment, we have significantly increased the efficiency and performance of the query. From the beginning, just by tuning “work_mem”, we have shaved approximately 13.5 seconds of processing time!

What about a top-N Heapsort?

Now if we want to demonstrate a top-N Heapsort, we can change the query just a little bit more:

set session work_mem to '64MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST LIMIT 10;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=567337.09..567337.12 rows=10 width=27) (actual time=3021.185..3021.190 rows=10 loops=1)
   Output: id, value, product_id, effective_date
   Buffers: shared hit=6 read=363702
   I/O Timings: read=1313.044
   ->  Sort  (cost=567337.09..568871.76 rows=613866 width=27) (actual time=3021.183..3021.186 rows=10 loops=1)
         Output: id, value, product_id, effective_date
         Sort Key: work_mem_test.value NULLS FIRST
         Sort Method: top-N heapsort  Memory: 26kB
         Buffers: shared hit=6 read=363702
         I/O Timings: read=1313.044
         ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..554071.67 rows=613866 width=27) (actual time=235.909..2911.978 rows=589379 loops=1)
               Output: id, value, product_id, effective_date
                Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
                ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=108.429..108.431 rows=0 loops=1)
                     Buffers: shared hit=6 read=669
                     I/O Timings: read=3.114
                     ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=5.582..5.582 rows=90001 loops=1)
                            Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                      ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=102.845..102.845 rows=499851 loops=1)
                           Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                           Buffers: shared hit=3 read=422
                           I/O Timings: read=2.037
 Query Identifier: 4969544646514690020
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.177 ms
 Execution Time: 3023.304 ms
(32 rows)

Time: 3107.421 ms (00:03.107)

Because we are only returning the top N rows, far less memory is used, since a different sort method can be chosen. The execution time is also further reduced.

As you can see, with a little tuning of the “work_mem” parameter, a lot of performance can be gained in the system. In this example, we increased “work_mem” a fairly small amount, from 64kB to 64MB. You never want to increase “work_mem” to a setting where, if all workers were busy, you could overrun the free memory on the system. Also, remember that there is some overhead to maintaining that memory, so it’s really important to find a good balance for your workload. Keep in mind that you can set this parameter at the server level, per session or transaction before a query, or at the user level.
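Those scopes can be set like this (the role name and values are illustrative; on CloudSQL the instance-wide value is set via database flags rather than ALTER SYSTEM):

```sql
SET LOCAL work_mem = '64MB';                      -- current transaction only
ALTER ROLE reporting_user SET work_mem = '64MB';  -- per user/role (placeholder role name)
ALTER SYSTEM SET work_mem = '16MB';               -- instance-wide; then SELECT pg_reload_conf();
```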

Enjoy!

Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!

In my new role at Google, not only am I still working with lots of Oracle and replication tools, I am also expanding more into moving Oracle systems to Google Cloud, onto either CloudSQL for PostgreSQL or AlloyDB for PostgreSQL. After looking at these systems for a little while, a few settings seem worth tweaking from their out-of-the-box values. My goal is to discuss some of them now and in future blog posts.

Let me start off by saying that managed PostgreSQL products such as Google’s CloudSQL for PostgreSQL and AlloyDB for PostgreSQL (in Preview as of this post) are designed to be low maintenance and to fit many different types of workloads. That being said, there are a few configuration parameters you should really look at tuning, as the defaults (as of PostgreSQL version 14) are in most cases just not the most efficient values for anything more than a VERY light workload.

work_mem

Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files; the default value is four megabytes (4MB). People coming from the Oracle world will equate this setting with the PGA; however, keep in mind that in PostgreSQL this is “private” per-operation memory, while in Oracle it is “shared” memory. Take care not to over-configure this setting in PostgreSQL.

A full description of the parameter can be found here.

random_page_cost

Sets the planner’s estimate of the cost of a non-sequentially-fetched disk page; the default is 4.0. In reality, this default is good for a system where disk performance is a concern (i.e., a system with HDDs rather than SSDs), as it assumes that random disk access is 40x slower than sequential access, tempered by an assumed cache hit rate. Essentially, if you want your system to prefer index and cached reads, lower this number from the default, but no lower than the setting for seq_page_cost. For normal CloudSQL for PostgreSQL deployments that use SSD, I like to set this to 2. In deployments which utilize AlloyDB for PostgreSQL, an even lower setting of 1.1 can be used due to the efficient Colossus storage implementation.
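A minimal sketch of the adjustment, keeping the documented floor in mind:

```sql
SHOW seq_page_cost;        -- 1.0 by default; random_page_cost should not go below this
SET random_page_cost = 2;  -- SSD-backed CloudSQL; 1.1 for AlloyDB
```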

For those that have been around Oracle for a while, this parameter behaves much like the “optimizer_index_cost_adj” parameter.

A full description of the parameter can be found here.

effective_io_concurrency

Sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel. The default is 1, and at this point this setting only affects bitmap heap scans. That being said, bitmap heap scans, while efficient, by nature have to look at the index as well as the corresponding heap blocks; if that data has to be read from disk, and your system can handle the parallelism (as when you use SSD storage), you should increase this to a more meaningful value. I will do a separate blog post to show the effects of this, but in general, increasing this number beyond half the number of available CPUs yields diminishing returns.
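A quick sketch (the value is illustrative; test increases incrementally against your own storage and CPU count):

```sql
SHOW effective_io_concurrency;      -- default is 1
SET effective_io_concurrency = 16;  -- benefits bitmap heap scans on SSD-backed storage
```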

A full description of the parameter can be found here.

In closing, just like Oracle and other RDBMSs, PostgreSQL has numerous configuration parameters, all of which can affect the workload. However, the above three parameters are the ones I most often find to have opportunities for optimization, especially on more modern platforms.

In future posts I will detail how each one of these can change a workload.

Enjoy!