
Tuning the PostgreSQL “effective_io_concurrency” Parameter

In my previous post, “Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!”, I introduced three PostgreSQL parameters that I feel are the most “neglected” and present the most opportunity for performance tuning in a PostgreSQL instance. I have already covered “work_mem”, so in Part 2 of this series I would like to demonstrate tuning the “effective_io_concurrency” parameter. While this parameter has been discussed in other blogs, I will attempt to make the discussion relevant to how it might affect a CloudSQL instance.

The “effective_io_concurrency” parameter reflects the number of simultaneous requests that can be handled efficiently by the disk subsystem. One thing to keep in mind is that currently this parameter only affects bitmap heap scans where the data is not already present in the shared buffers. In general, if you are using spinning HDD devices, it should be set to the number of drives that participate in the RAID stripe. Where SSDs are used, you can set this value much higher, although you must take into account any Quality of Service I/O limits that are usually present in a cloud implementation. A full explanation of the parameter can be found here.
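
To experiment with the parameter, it can be inspected and changed at the session level, or attached to a tablespace; a minimal sketch (the tablespace form requires appropriate privileges, and on CloudSQL the instance-wide value is normally changed through a database flag):

SHOW effective_io_concurrency;                                   -- CloudSQL for PostgreSQL defaults to 1
SET effective_io_concurrency = 8;                                -- session level, handy for testing
ALTER TABLESPACE pg_default SET (effective_io_concurrency = 8);  -- scoped to a tablespace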

To do a simple demonstration of how this parameter can affect queries, I set up a small Google CloudSQL for PostgreSQL instance (2 vCPU x 8GB memory), loaded up some tables, and then executed a query that ensured a “bitmap heap scan”, changing the “effective_io_concurrency” parameter between each test. In addition, the instance was bounced before each test to ensure that the shared buffers were cleared.

Setup:

CREATE TABLE public.effective_io_concurrency_test (
		id int PRIMARY KEY, 
		value numeric,
		product_id int,
		effective_date timestamp(3)
		);
INSERT INTO public.effective_io_concurrency_test VALUES ( 
		generate_series(0,100000000), 
		random()*1000,
		random()*100,
		current_timestamp(3));

CREATE INDEX prod_id_idx ON public.effective_io_concurrency_test (product_id);
VACUUM ANALYZE public.effective_io_concurrency_test;

Execution:

The resulting query plans did not show any variation in execution path or cost, but the timings did vary across the tests.

EXPLAIN (analyze, verbose, costs, settings, buffers, wal, timing, summary, format text)
SELECT * FROM public.effective_io_concurrency_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200;

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.effective_io_concurrency_test  (cost=7035.15..522009.95 rows=547023 width=27) (actual time=293.542..33257.631 rows=588784 loops=1)
   Output: id, value, product_id, effective_date
   Recheck Cond: (((effective_io_concurrency_test.id >= 10000) AND (effective_io_concurrency_test.id <= 100000)) OR ((effective_io_concurrency_test.product_id >= 100) AND (effective_io_concurrency_test.product_id <= 200)))
   ->  BitmapOr  (cost=7035.15..7035.15 rows=547450 width=0) (actual time=156.951..156.954 rows=0 loops=1)
         Buffers: shared hit=6 read=668
         I/O Timings: read=24.501
         ->  Bitmap Index Scan on effective_io_concurrency_test_pkey  (cost=0.00..1459.74 rows=94117 width=0) (actual time=14.908..14.908 rows=90001 loops=1)
               Index Cond: ((effective_io_concurrency_test.id >= 10000) AND (effective_io_concurrency_test.id <= 100000))
         ->  Bitmap Index Scan on prod_id_idx  (cost=0.00..5301.90 rows=453333 width=0) (actual time=142.040..142.040 rows=499255 loops=1)
               Index Cond: ((effective_io_concurrency_test.product_id >= 100) AND (effective_io_concurrency_test.product_id <= 200))
               Buffers: shared hit=3 read=421
               I/O Timings: read=14.154
 Settings: effective_cache_size = '3259448kB', effective_io_concurrency = '8', random_page_cost = '2', work_mem = '512MB'
 Query Identifier: -8974663893066369302
 Planning:
   Buffers: shared hit=103 read=17
   I/O Timings: read=26.880
 Planning Time: 28.350 ms
 Execution Time: 33322.389 ms

Summary:

effective_io_concurrency               Query Time
 1   /* CloudSQL Default */            194708.918 ms
 2   /* Equal to number of CPUs */     107953.205 ms
 4   /* 2x number of CPUs */            58161.010 ms
 8   /* 4x number of CPUs */            33322.389 ms
10   /* 5x number of CPUs */            30118.593 ms
20   /* 10x number of CPUs */           28758.106 ms

As you can see, there are diminishing returns as the parameter is increased, but why? A look at the Google Cloud Console “System Insights” charts made the reason clear.

**** One thing to note is that the CPU utilization spike is a result of the shutdown and restart of the instance between each test. The utilization following the spike represents the utilization found during the test itself.

The Conclusion:

While CPU utilization didn’t hit any limit, the IOPS limit for that CloudSQL shape did. You can add IOPS by changing the shape, but the point of this exercise was to show that the optimal setting always depends on your workload and instance shape. In this case, and for this CloudSQL shape, you might actually want to choose a setting of “4”, which is 2x the number of CPUs and doesn’t quite max out the guaranteed IOPS. That setting doesn’t give you the fastest query time, but it does leave resources available for other queries to execute at the same time.
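
If you settle on a value such as 4, it can be applied more broadly than a single session; a minimal sketch (the role name is hypothetical, and on CloudSQL the server-wide value is normally set through a database flag rather than ALTER SYSTEM):

ALTER DATABASE postgres SET effective_io_concurrency = 4;   -- every new connection to this database
ALTER ROLE etl_user SET effective_io_concurrency = 4;       -- only for a specific application role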

As always, be sure to test any changes in your own system and balance accordingly, because your “mileage may vary” depending on your individual situation. That being said, in almost no cases is the default setting acceptable unless you are running on HDDs or on an OS that lacks the “posix_fadvise” function (such as macOS or Solaris).

Enjoy!

Tuning the PostgreSQL “work_mem” Parameter

In my previous post, “Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!”, I introduced three PostgreSQL parameters that I feel are the most “neglected” and present the most opportunity for performance tuning in a PostgreSQL instance. In the first of what will become a three-part series, I would like to demonstrate tuning the “work_mem” parameter.

The “work_mem” parameter sets the memory available to database operations such as:

  • sorts
  • bitmap heap scans
  • hash joins
  • materialized common table expressions (WITH statements)

To get started, let’s create a test table with 100M rows of data:

CREATE TABLE public.work_mem_test (
		id int PRIMARY KEY, 
		value numeric,
		product_id int,
		effective_date timestamp(3)
		);
INSERT INTO public.work_mem_test VALUES ( 
		generate_series(0,100000000), 
		random()*1000,
		random()*100,
		current_timestamp(3));

CREATE INDEX prod_value_idx ON public.work_mem_test (product_id);
VACUUM ANALYZE public.work_mem_test;

We will then run an EXPLAIN ANALYZE with the “COSTS, BUFFERS, VERBOSE” options so that we can fully see what is going on with the query. For demonstration purposes, I have set “work_mem” to the lowest possible setting of 64kB. In addition, to avoid the variability of parallel processing, I have set “max_parallel_workers_per_gather” to zero to disable it. Most systems may also see better gains than this test case, as this was a very small 2 vCPU / 8GB Google CloudSQL for PostgreSQL instance:

set session work_mem to '64kB';
set max_parallel_workers_per_gather = 0;
set effective_io_concurrency = 20;
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2640424.97..2641959.63 rows=613866 width=27) (actual time=16593.228..16778.132 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24248kB
   Buffers: shared hit=6 read=363702, temp read=15340 written=16486
   I/O Timings: read=4120.104
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..2539439.23 rows=613866 width=27) (actual time=82.454..15413.822 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
         Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
         Rows Removed by Index Recheck: 48506004
         Heap Blocks: exact=2058 lossy=360975
         Buffers: shared hit=6 read=363702
         I/O Timings: read=4120.104
         ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=80.340..80.342 rows=0 loops=1)
               Buffers: shared hit=6 read=669
               I/O Timings: read=17.683
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=12.680..12.680 rows=90001 loops=1)
                     Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                     Buffers: shared hit=3 read=247
                     I/O Timings: read=7.831
               ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=67.657..67.657 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=3 read=422
                     I/O Timings: read=9.852
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=39 read=2
   I/O Timings: read=2.588
 Planning Time: 3.136 ms
 Execution Time: 16811.970 ms
(30 rows)

Time: 16903.784 ms (00:16.904)

EXPLAIN, via the BUFFERS keyword, gives us the following data points:

Rows Removed by Index Recheck: 48506004
Heap Blocks: exact=2058 lossy=360975


Execution Time: 16811.970 ms

This essentially means that 64kB of work_mem can hold 2058 exact blocks in the bitmap structure. Everything that falls outside of that bitmap becomes lossy blocks, meaning they don’t point to an exact tuple but rather to a block containing many tuples. The recheck condition then scans each of those blocks for the tuples the query is looking for.

The following formula is a starting point, but may or may not give you the exact setting needed based on various factors. Since we used the lowest possible work_mem, the setting becomes a multiple of that:

new_mem_in_mbytes = 
 ((exact heap blocks + lossy heap blocks) / exact heap blocks) * work_mem_in_bytes / 1048576
= ceil(round(((2058 + 360975) / 2058) * 65536 / 1048576,1))
= 11MB
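
If you prefer to let the database do the arithmetic, the same calculation can be run directly in psql; a minimal sketch, assuming you substitute your own exact/lossy heap block counts and current work_mem in bytes:

SELECT ceil(round(((2058 + 360975)::numeric / 2058) * 65536 / 1048576, 1)) AS suggested_work_mem_mb;  -- returns 11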

Note: In most cases I have found that this formula works well on the first pass. However, as you will see in the subsequent tests, this estimated work_mem setting wasn’t quite enough, and that is likely due to a mis-estimate by the planner.

Reducing the Lossy Block Access

So for the next test I will increase the “work_mem” to 11MB and re-execute the test.

set session work_mem to '11MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=2160340.77..2161875.43 rows=613866 width=27) (actual time=11382.002..11574.572 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24232kB
   Buffers: shared hit=23329 read=340379, temp read=3029 written=3034
   I/O Timings: read=3618.302
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..2090832.53 rows=613866 width=27) (actual time=185.251..10923.764 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
          Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
          Heap Blocks: exact=164090 lossy=198943
          ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=132.954..132.957 rows=0 loops=1)
               Buffers: shared hit=675
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=4.449..4.450 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=128.503..128.503 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=425
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.179 ms
 Execution Time: 11611.071 ms
(26 rows)

Time: 11695.952 ms (00:11.696)

With 11MB we got more exact heap blocks, but there is still not enough memory to avoid lossy blocks entirely. Applying the formula to the new execution plan:

new_mem_in_mbytes = 
 ((exact heap blocks + lossy heap blocks) / exact heap blocks) * work_mem_in_bytes / 1048576
= ceil(round(((164090 + 198943) / 164090) * 12582912 / 1048576,1));
= 24MB


Let’s increase just a little bit more to 24MB as the latest iteration of the formula has suggested.

set session work_mem to '24MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=1709385.33..1710919.99 rows=613866 width=27) (actual time=3651.589..3791.250 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: external merge  Disk: 24232kB
   Buffers: shared hit=23329 read=340379, temp read=3029 written=3031
   I/O Timings: read=1493.162
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..1639877.09 rows=613866 width=27) (actual time=348.261..3201.421 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
          Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
          ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=188.309..188.311 rows=0 loops=1)
               Buffers: shared hit=675
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=5.090..5.091 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=183.215..183.215 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=425
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.242 ms
 Execution Time: 3828.215 ms
(25 rows)

Time: 3912.670 ms (00:03.913)

No more lossy block scans! The execution time has also dropped quite significantly from the first run.

Handling the Sort Method

Now, we need to pay attention to the explain plan line:

 "Bitmap Heap Scan on public.work_mem_test….rows=613866 width=27" 
 "Sort Method: external merge Disk: 24232kB."

Some of the sort is done in memory and some is spilled to disk. To estimate how much memory the entire rowset needs, multiply the input rows by the row width (613866 rows x 27 bytes, roughly 16MB). In addition, the sort spilled another 24MB to disk, so let’s add that to “work_mem” as well.

16MB (rowset) + 24MB (spilled to disk) = 40MB more "work_mem"

So with the current “work_mem” of 24MB plus the additional 40MB computed to keep the sort in memory (rounded up), the total needed is 64MB.
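
To sanity-check that arithmetic in psql, here is a minimal sketch using the row count and width reported by the plan plus the 24232kB the sort spilled to disk:

SELECT pg_size_pretty(613866::bigint * 27)                AS estimated_rowset,    -- roughly 16MB
       pg_size_pretty(613866::bigint * 27 + 24232 * 1024) AS rowset_plus_spill;   -- roughly 39MB, rounded up to 40MB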

Let’s run one more test:

set session work_mem to '64MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=613087.41..614622.07 rows=613866 width=27) (actual time=3186.918..3309.896 rows=589379 loops=1)
   Output: id, value, product_id, effective_date
   Sort Key: work_mem_test.value NULLS FIRST
   Sort Method: quicksort  Memory: 61169kB
   Buffers: shared hit=6 read=363702
   I/O Timings: read=1306.892
   ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..554071.67 rows=613866 width=27) (actual time=245.348..2908.344 rows=589379 loops=1)
         Output: id, value, product_id, effective_date
          Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
          ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=115.051..115.053 rows=0 loops=1)
               Buffers: shared hit=6 read=669
               I/O Timings: read=3.561
               ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=6.160..6.161 rows=90001 loops=1)
                      Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=108.889..108.889 rows=499851 loops=1)
                     Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                     Buffers: shared hit=3 read=422
                     I/O Timings: read=2.231
 Query Identifier: 731698457235411789
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.180 ms
 Execution Time: 3347.271 ms
(28 rows)

Time: 3431.188 ms (00:03.431)

With that adjustment, we have significantly increased the efficiency and performance of the query. Just by tuning “work_mem”, we have shaved approximately 13.5 seconds off the processing time compared to the first run!

What about a top-N Heapsort??

Now if we want to demonstrate a top-N Heapsort, we can change the query just a little bit more:

set session work_mem to '64MB';
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.work_mem_test
WHERE id BETWEEN 10000 AND 100000
OR product_id BETWEEN 100 AND 200
ORDER BY value NULLS FIRST LIMIT 10;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=567337.09..567337.12 rows=10 width=27) (actual time=3021.185..3021.190 rows=10 loops=1)
   Output: id, value, product_id, effective_date
   Buffers: shared hit=6 read=363702
   I/O Timings: read=1313.044
   ->  Sort  (cost=567337.09..568871.76 rows=613866 width=27) (actual time=3021.183..3021.186 rows=10 loops=1)
         Output: id, value, product_id, effective_date
         Sort Key: work_mem_test.value NULLS FIRST
         Sort Method: top-N heapsort  Memory: 26kB
         Buffers: shared hit=6 read=363702
         I/O Timings: read=1313.044
         ->  Bitmap Heap Scan on public.work_mem_test  (cost=7805.12..554071.67 rows=613866 width=27) (actual time=235.909..2911.978 rows=589379 loops=1)
               Output: id, value, product_id, effective_date
                Recheck Cond: (((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000)) OR ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200)))
                ->  BitmapOr  (cost=7805.12..7805.12 rows=614306 width=0) (actual time=108.429..108.431 rows=0 loops=1)
                     Buffers: shared hit=6 read=669
                     I/O Timings: read=3.114
                     ->  Bitmap Index Scan on work_mem_test_pkey  (cost=0.00..1280.95 rows=82638 width=0) (actual time=5.582..5.582 rows=90001 loops=1)
                            Index Cond: ((work_mem_test.id >= 10000) AND (work_mem_test.id <= 100000))
                      ->  Bitmap Index Scan on prod_value_idx  (cost=0.00..6217.24 rows=531667 width=0) (actual time=102.845..102.845 rows=499851 loops=1)
                           Index Cond: ((work_mem_test.product_id >= 100) AND (work_mem_test.product_id <= 200))
                           Buffers: shared hit=3 read=422
                           I/O Timings: read=2.037
 Query Identifier: 4969544646514690020
 Planning:
   Buffers: shared hit=30
 Planning Time: 0.177 ms
 Execution Time: 3023.304 ms
(32 rows)

Time: 3107.421 ms (00:03.107)

Because we are only returning the top N rows, a different sort method can be used and far less memory is needed. The execution time is also further reduced.

As you can see, with a little tuning of the “work_mem” parameter, a lot of performance can be gained in the system. In this example, we increased “work_mem” a fairly small amount, from 64kB to 64MB. In my mind, you never want to increase “work_mem” to a value where, if every worker were using its full allocation at once, you could overrun the free memory on the system. Also, remember that there is some overhead to maintaining that memory, so it’s really important to find a good balance for your workload. Keep in mind that you can set this parameter at the server level, at the session level within the query text, or at the user/role level.
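
For reference, here is a minimal sketch of what those three scopes can look like (the role name is hypothetical, and on managed platforms such as CloudSQL the server-wide value is typically set through a database flag rather than ALTER SYSTEM):

SET work_mem = '64MB';                             -- session level, inside the query text or script
ALTER ROLE reporting_user SET work_mem = '64MB';   -- user/role level, for a specific workload
ALTER SYSTEM SET work_mem = '64MB';                -- server level on self-managed PostgreSQL
SELECT pg_reload_conf();                           -- reload so the new server-level value takes effect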

Enjoy!

Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!

In my new role at Google, not only am I still working with lots of Oracle and replication tools, I am also expanding more into moving Oracle systems to Google Cloud on either CloudSQL for PostgreSQL or AlloyDB for PostgreSQL. After looking at these systems for a little while, there seem to be a few things worth tweaking from the out-of-the-box values. It is my goal to discuss some of those things now and in future blog posts.

Let me start off by saying that managed products such as Google’s CloudSQL for PostgreSQL and AlloyDB for PostgreSQL (in Preview as of this post) are designed to be low maintenance and fit many different types of workloads. That being said, there are a few configuration parameters you should really look at tuning, as the defaults (as of PostgreSQL version 14) in most cases are just not the most efficient values if your workload is anything more than a VERY light one.

work_mem

Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files; the default value is four megabytes (4MB). People coming from the Oracle world will equate this setting with the PGA; however, keep in mind that Oracle manages the PGA against an instance-wide aggregate target, while PostgreSQL’s “work_mem” is a per-operation limit with no overall cap, so a single query can consume several multiples of it. You must take care not to over-configure this setting in PostgreSQL.

A full description of the parameter can be found here.

random_page_cost

Sets the planner’s estimate of the cost of a non-sequentially-fetched disk page; the default is 4.0. In reality, the default is appropriate for a system where random disk performance is a real concern (i.e., a system with HDDs rather than SSDs), as it assumes random disk access is 40x slower than sequential access. Essentially, if you want your system to prefer index and cached reads, lower this number from the default, but to no lower than the setting for seq_page_cost. For normal CloudSQL for PostgreSQL deployments that use SSD, I like to set this to 2. In deployments which utilize AlloyDB for PostgreSQL, an even lower setting of 1.1 can be used due to the efficient Colossus storage implementation.
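
A lower value can be trialled per session before committing to a broader change; a minimal sketch, assuming a database named postgres for illustration:

SET random_page_cost = 2;                           -- session level, then re-run EXPLAIN ANALYZE
ALTER DATABASE postgres SET random_page_cost = 2;   -- persist it for every new connection to the database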

For those that have been around Oracle for a while, this parameter behaves much like the “optimizer_index_cost_adj” parameter.

A full description of the parameter can be found here.

effective_io_concurrency

Sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value will increase the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel. The default is 1, and at this point the setting only affects bitmap heap scans. That being said, bitmap heap scans, while efficient, by nature have to read the index as well as the corresponding heap blocks. If that data has to be read from disk and your storage can handle the parallelism (as SSD storage can), you should increase this to a more meaningful value. I will do a separate blog post to show the effects of this, but in general, as this number is increased beyond 1/2 the number of CPUs available, greater diminishing returns are observed.

A full description of the parameter can be found here.

In closing, just like Oracle and other RDBMSs, PostgreSQL has numerous configuration parameters, all of which can affect the workload. However, the above three parameters are the ones where I most often find opportunities for optimization, especially on more modern platforms.

In future posts I will detail how each one of these can change a workload.

Enjoy!

Indexes Can Check-Out Anytime, But Can They Leave?

When I think about indexes in any DBMS, I immediately think about the lyrics in the Eagles song “Hotel California”.

“You can check-out any time you like, but you can never leave…..”

Almost every system that has been around for a while is over-indexed. Every index costs something to maintain, and some cost more than others. Also, systems and code change. There could be an index that was once needed, but over time the code shifts to use another, more efficient one instead. Guess what though, you are still maintaining that old index. That costs CPU and I/O, which translates into TIME.

Thankfully, Oracle 12.2 introduced a feature called INDEX USAGE TRACKING. In older versions this was called INDEX MONITORING, and it was much harder and more invasive to use. In the newer versions it just works, and you can read more about both options here:

https://oracle-base.com/articles/12c/index-usage-tracking-12cr2

By default, index usage is recorded in the views “DBA_INDEX_USAGE” and “V$INDEX_USAGE_INFO”. The “V$” view is flushed to the “DBA” view every 15 minutes. For most purposes, using the “DBA” view is just fine. The thing to remember is that querying only these views tells you the indexes that are being USED, not the ones that are NOT USED. To get the ones that are not used, we just need to expand the query a bit:

SELECT
    di.owner,
    di.index_name,
    di.index_type,
    di.table_name
FROM
    dba_indexes di
WHERE
             di.owner in (select username from dba_users where oracle_maintained = 'N')
    AND di.uniqueness = 'NONUNIQUE'
    AND NOT EXISTS (
        SELECT
            1
        FROM
            dba_index_usage iu
        WHERE
                iu.owner = di.owner
            AND iu.name = di.index_name
    )
GROUP BY
    di.owner,
    di.index_name,
    di.index_type,
    di.table_name
ORDER BY
    di.owner,
    di.table_name,
    di.index_name;
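
For comparison, the indexes that ARE being used can be listed straight from the tracking view; a minimal sketch (column names as documented for 12.2, so verify against your version):

SELECT owner,
       name,
       total_access_count,
       last_used
FROM   dba_index_usage
ORDER  BY total_access_count DESC;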

Now that we have the indexes that are not used, which ones have the biggest impact on database operations? We just expand the query again to leverage the “DBA_TAB_MODIFICATIONS” table. This will allow us to see how many operations occur on each table that the index sits on:

SELECT
    di.owner,
    di.index_name,
    di.index_type,
    di.table_name,
    SUM(tm.inserts + tm.updates + tm.deletes) operations
FROM
    dba_indexes di,
    dba_tab_modifications tm
WHERE
             di.owner in (select username from dba_users where oracle_maintained = 'N')
    AND di.owner = tm.table_owner
    AND di.table_name = tm.table_name
    AND di.uniqueness = 'NONUNIQUE'
    AND NOT EXISTS (
        SELECT
            1
        FROM
            dba_index_usage iu
        WHERE
                iu.owner = di.owner
            AND iu.name = di.index_name
    )
GROUP BY
    di.owner,
    di.index_name,
    di.index_type,
    di.table_name
ORDER BY
    SUM(tm.inserts + tm.updates + tm.deletes) ASC NULLS FIRST,
    di.owner,
    di.table_name,
    di.index_name;

Now, I have used an equi-join against DBA_TAB_MODIFICATIONS, so this only shows us unused indexes on tables for which the system has recorded operations. If you want to show all indexes regardless of whether there were operations or not, simply change the join against DBA_TAB_MODIFICATIONS to an outer join, as sketched below.
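
For completeness, the outer-join variant might look something like this; a sketch using ANSI join syntax, where the aggregate simply comes back NULL when no modifications have been recorded:

SELECT
    di.owner,
    di.index_name,
    di.index_type,
    di.table_name,
    SUM(tm.inserts + tm.updates + tm.deletes) operations
FROM
    dba_indexes di
    LEFT OUTER JOIN dba_tab_modifications tm
        ON  tm.table_owner = di.owner
        AND tm.table_name  = di.table_name
WHERE
             di.owner IN (SELECT username FROM dba_users WHERE oracle_maintained = 'N')
    AND di.uniqueness = 'NONUNIQUE'
    AND NOT EXISTS (
        SELECT 1
        FROM   dba_index_usage iu
        WHERE  iu.owner = di.owner
        AND    iu.name  = di.index_name
    )
GROUP BY
    di.owner,
    di.index_name,
    di.index_type,
    di.table_name
ORDER BY
    SUM(tm.inserts + tm.updates + tm.deletes) ASC NULLS FIRST,
    di.owner,
    di.table_name,
    di.index_name;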

In my case, there were some very large indexes that were unused sitting on tables that had LOTS of operations (names changed to protect the innocent):

OWNER   INDEX NAME   INDEX TYPE              TABLE NAME   OPERATIONS
HR      FK_1_IDX     NORMAL                  EMPLOYEES     295931545
HR1     EMP_TOT      FUNCTION-BASED NORMAL   EMP_HOURS    1374288673
HR2     EMP_TOT      FUNCTION-BASED NORMAL   EMP_HOURS    1425114200
HR3     EMP_TOT      FUNCTION-BASED NORMAL   EMP_HOURS    2487284020

Adding “OPERATIONS” to the query now gives us some sense of the “BENEFIT” of dropping the index. Based on the results, you can present your case for dropping it. People always love to have the “BENEFIT” quantified, and this is the closest you can get without performing a session trace, which can be invasive and makes it fairly difficult to capture the exact statements you are looking for.

Hopefully this will give you the additional information you need to advocate for dropping that index that is no longer being used.

SQLPlus is Expensive!


I was recently tasked to work on a server that had severe performance issues. The first thing I noticed was that the server was, at times, running at a load average of 3-4x the number of cores on the host, and about 1.5x under steady state! Further investigation yielded a large quantity of SQLPlus invocations from a set of ETL scripts. Digging even more, I found that the script performed ETL across more than 100 schemas, 75% of which were done via parallel background processes with a random “sleep” built in.

This is where I noticed something interesting. Instead of coding the script to invoke SQLPlus once, run a bunch of statements and exit, the coder constructed the script to invoke SQLPlus, run a statement, exit, invoke SQLPlus, run a statement, and on and on and on.

My first thought was that this has to be EXPENSIVE right?

When a SQLPlus login is made, a lot of things happen. Simply stated, the OS has to start a database process, and the database has to do a bunch of things to get your session ready (password authentication, memory allocation, updating tables with login details, setting up permissions, etc.). Managing processes is one of the most expensive things an operating system can do.

In terms of the database itself, no less than 19 individual statements are executed in order to prepare your session. Digging into those statements, probably one of the most expensive is:

sql_id:  9zg9qd9bm4spu
update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0000-00-00', to_date(NULL), :2) where user#=:1


It can be so expensive that Oracle has actually logged a bug (Bug 33121934: Library cache lock / load lock / mutex x during connection storm due to update user$) whose fix allows you to stop updating the last login time on every login and instead do it every X number of seconds. The explain plan for this statement is actually quite lengthy, and there is a lot more work going on than you might think!

So based on all of that, I headed back to my scripts, only to discover that for every invocation of the script (the script is invoked for each of the 114 schemas), there were a minimum of 11 and a maximum of 27 SQLPlus invocations, depending on the schema. So every minute this script was executed, there were a total of almost 1,400 SQLPlus invocations!!! Definitely a big contributor to the very high load average on the server.

As with anything in IT, change is tough, and there is hardly ever much appetite to change something that “works”. That is where it is important to always quantify the benefits. So to do this, I wrote a quick script to show the difference in CPU utilization between a script that makes many SQLPlus invocations and one that makes a single invocation and then calls many SQL scripts, following the pattern sketched below.
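
The idea is simply to replace the per-statement invocations with one driver script that runs every step in a single session; a minimal sketch, with hypothetical file names:

-- run_etl.sql : invoked once per schema, e.g. sqlplus -s user/pass @run_etl.sql
WHENEVER SQLERROR EXIT FAILURE
@etl_step1.sql
@etl_step2.sql
@etl_step3.sql
EXIT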

The resulting stats are as follows. The original script, which made one SQLPlus invocation for each SQL file called, executed SQLPlus 61 times and yielded the following system usage:

real     0m0.876s
user     0m0.480s
sys      0m0.463s

The modified script that reduced the number of calls from 61 to 14 (77%) yielded the following system usage:

real     0m0.188s
user     0m0.117s
sys      0m0.085s

*** NOTE: The test system I was working on had no data to manipulate for ETL, so most of the work was the SQLPlus call itself, which is what I wanted to demonstrate anyway!

As you can see, reducing the number of SQLPlus calls by 77%, reduced system usage by:

  • real: 79%
  • user: 75%
  • sys: 82%

So with a small change to group sql statements into one SQLPlus call, we are able to drastically improve system stability and reduce resource utilization. A WIN in my book! Once this gets through the test cycles, I will follow up with a post on the improvement seen in production.