Tuning the PostgreSQL “effective_io_concurrency” Parameter

In my previous post “Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!”, I introduced the three PostgreSQL parameters that I feel are the most neglected and present the greatest opportunity for performance tuning in a PostgreSQL instance. Having already covered “work_mem”, in Part 2 of this series I would like to demonstrate tuning the “effective_io_concurrency” parameter. While this parameter has been discussed in other blogs, I will attempt to make the discussion relevant to how it might affect a CloudSQL instance.

The parameter “effective_io_concurrency” reflects the number of simultaneous requests that can be handled efficiently by the disk subsystem. One thing to keep in mind is that this parameter currently only affects bitmap heap scans, and only where the data is not already present in the shared buffers. In general, with spinning HDD devices this should be set to the number of drives that participate in the RAID stripe. Where SSDs are used, the value can be set much higher, although you must take into account any Quality of Service I/O ops limits, which are usually present in a cloud implementation. A full explanation of the parameter can be found in the PostgreSQL documentation.
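
For reference, the current value is easy to check, and the parameter can be overridden per session or per tablespace (useful when tablespaces sit on storage with different characteristics). A quick sketch, where the tablespace name “fast_ssd” is hypothetical:

-- Show the current setting (the PostgreSQL default is 1)
SHOW effective_io_concurrency;

-- Override for the current session only
SET effective_io_concurrency = 8;

-- Override for a tablespace on faster storage ("fast_ssd" is hypothetical)
ALTER TABLESPACE fast_ssd SET (effective_io_concurrency = 16);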

For a simple demonstration of how this parameter can affect queries, I set up a small Google CloudSQL for PostgreSQL instance (2 vCPU x 8GB memory), loaded up a table, then executed a query that forced a bitmap heap scan, changing the “effective_io_concurrency” parameter between tests. In addition, the instance was bounced before each test to ensure that the shared buffers were cleared.

Setup:

CREATE TABLE public.effective_io_concurrency_test (
		id int PRIMARY KEY, 
		value numeric,
		product_id int,
		effective_date timestamp(3)
		);
INSERT INTO public.effective_io_concurrency_test VALUES ( 
		generate_series(0,100000000), 
		random()*1000,
		random()*100,
		current_timestamp(3));

CREATE INDEX prod_id_idx ON public.effective_io_concurrency_test (product_id);
VACUUM ANALYZE public.effective_io_concurrency_test;
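
For reference, below is one way the parameter could be changed between runs. This is just a sketch: “effective_io_concurrency” is user-settable and needs no restart by itself; the instance bounce was only done to clear the shared buffers.

-- Applied before each run, cycling through 1, 2, 4, 8, 10 and 20
ALTER DATABASE postgres SET effective_io_concurrency = 8;

-- Alternatively, for the current session only:
SET effective_io_concurrency = 8;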

Execution:

The resulting query plans did not show any variation in execution path or cost, but the timings did vary across the tests.

EXPLAIN (analyze, verbose, costs, settings, buffers, wal, timing, summary, format text)
SELECT * FROM public.effective_io_concurrency_test
WHERE id BETWEEN 10000 AND 100000
   OR product_id BETWEEN 100 AND 200;

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on public.effective_io_concurrency_test  (cost=7035.15..522009.95 rows=547023 width=27) (actual time=293.542..33257.631 rows=588784 loops=1)
   Output: id, value, product_id, effective_date
   Recheck Cond: (((effective_io_concurrency_test.id >= 10000) AND (effective_io_concurrency_test.id <= 100000)) OR ((effective_io_concurrency_test.product_id >= 100) AND (effective_io_concurrency_test.product_id <= 200)))
   ->  BitmapOr  (cost=7035.15..7035.15 rows=547450 width=0) (actual time=156.951..156.954 rows=0 loops=1)
         Buffers: shared hit=6 read=668
         I/O Timings: read=24.501
         ->  Bitmap Index Scan on effective_io_concurrency_test_pkey  (cost=0.00..1459.74 rows=94117 width=0) (actual time=14.908..14.908 rows=90001 loops=1)
               Index Cond: ((effective_io_concurrency_test.id >= 10000) AND (effective_io_concurrency_test.id <= 100000))
         ->  Bitmap Index Scan on prod_id_idx  (cost=0.00..5301.90 rows=453333 width=0) (actual time=142.040..142.040 rows=499255 loops=1)
               Index Cond: ((effective_io_concurrency_test.product_id >= 100) AND (effective_io_concurrency_test.product_id <= 200))
               Buffers: shared hit=3 read=421
               I/O Timings: read=14.154
 Settings: effective_cache_size = '3259448kB', effective_io_concurrency = '8', random_page_cost = '2', work_mem = '512MB'
 Query Identifier: -8974663893066369302
 Planning:
   Buffers: shared hit=103 read=17
   I/O Timings: read=26.880
 Planning Time: 28.350 ms
 Execution Time: 33322.389 ms

Summary:

effective_io_concurrency                 Query Time
 1   /* CloudSQL default */          194708.918 ms
 2   /* equal to number of CPUs */   107953.205 ms
 4   /* 2x number of CPUs */          58161.010 ms
 8   /* 4x number of CPUs */          33322.389 ms
10   /* 5x number of CPUs */          30118.593 ms
20   /* 10x number of CPUs */         28758.106 ms

As you can see, there are diminishing returns as the parameter is increased, but why? A look at the Google Cloud Console “System Insights” made the reason clear.

**** One thing to note is that the CPU utilization spike is a result of the shutdown and restart of the instance between each test. The utilization following the spike represents the utilization during the test itself.

The Conclusion:

While CPU utilization didn’t hit any limit, the IOPS limits for this CloudSQL shape did. You can add IOPS by changing the shape, but the point here was to show that the optimal setting always depends on your workload and instance shape. In this case, and for this CloudSQL shape, you might actually want to choose a setting of “4”, which represents 2x the number of CPUs and doesn’t quite max out the guaranteed IOPS. That setting doesn’t give you the fastest query time, but it leaves headroom for other queries to execute at the same time.

As always, be sure to test any changes in your own system and balance accordingly, because your “mileage may vary” depending on your individual situation. That being said, in almost no case is the default setting acceptable unless you are running on HDDs or on an OS that lacks the “posix_fadvise” function (such as macOS or Solaris).

Enjoy!

Three Configuration Parameters for PostgreSQL That Are Worth Further Investigation!

In my new role at Google, not only am I still working with lots of Oracle and replication tools, but I am also expanding into moving Oracle systems to Google Cloud, onto either CloudSQL for PostgreSQL or AlloyDB for PostgreSQL. After looking at these systems for a little while, a few things stand out as worth tweaking from their out-of-the-box values. My goal is to discuss some of those things now and in future blog posts.

Let me start off by saying that managed PostgreSQL products such as Google’s CloudSQL for PostgreSQL and AlloyDB for PostgreSQL (in Preview as of this post) are designed to be low maintenance and to fit many different types of workloads. That being said, there are a few configuration parameters you should really look at tuning, as the defaults (as of PostgreSQL version 14) are in most cases not set to the most efficient value for anything more than a VERY light workload.

work_mem

Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files; the default value is four megabytes (4MB). People coming from the Oracle world will equate this setting with the PGA; however, keep in mind that this is per-operation “private” memory in PostgreSQL, whereas Oracle manages PGA memory as a single instance-wide aggregate. Because every concurrently running operation can allocate up to this amount, total memory consumption can reach many multiples of the setting, so take care not to over-configure it.

A full description of the parameter can be found in the PostgreSQL documentation.
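
As a quick illustration, “work_mem” can be raised for a single session and the effect verified by checking whether a sort still spills to disk. This is only a sketch; the 64MB value and the table and column names are placeholders, not recommendations.

SHOW work_mem;            -- 4MB by default
SET work_mem = '64MB';    -- session-level override (placeholder value)

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_large_table
ORDER BY some_column;
-- In the plan, "Sort Method: external merge  Disk: ..." means the sort
-- spilled to temporary files; "Sort Method: quicksort  Memory: ..." means
-- it fit within work_mem.

RESET work_mem;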

random_page_cost

Sets the planner’s estimate of the cost of a non-sequentially-fetched disk page; the default is 4.0. In reality, the default suits a system in which disk performance is a concern (i.e., a system with HDDs rather than SSDs), as it models random disk access as 40x slower than sequential access while assuming roughly 90% of random reads are served from cache. Essentially, if you want your system to prefer index and cached reads, lower this number from the default, but go no lower than the setting for seq_page_cost. For normal CloudSQL for PostgreSQL deployments that use SSD, I like to set this to 2. In deployments which utilize AlloyDB for PostgreSQL, an even lower setting of 1.1 can be used due to the efficient Colossus Storage implementation.

For those that have been around Oracle for a while, this parameter behaves much like the “optimizer_index_cost_adj” parameter.

A full description of the parameter can be found in the PostgreSQL documentation.
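
A sketch of applying these suggestions at the database level (the database name “mydb” is a placeholder):

-- CloudSQL for PostgreSQL on SSD storage
ALTER DATABASE mydb SET random_page_cost = 2;

-- AlloyDB for PostgreSQL
-- ALTER DATABASE mydb SET random_page_cost = 1.1;

-- New sessions against mydb pick up the value; verify in one with:
SHOW random_page_cost;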

effective_io_concurrency

Sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value increases the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel. The default is 1, and at this point the setting only affects bitmap heap scans. That being said, a bitmap heap scan, while efficient, by nature has to read the index as well as the corresponding heap blocks; if that data has to be read from disk, and if your storage can handle the parallelism (as SSD storage typically can), you should increase this to a more meaningful value. I will do a separate blog post to show the effects, but in general, as this number is increased beyond half the number of available CPUs, greater diminishing returns are observed.

A full description of the parameter can be found in the PostgreSQL documentation.
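
As a rough sketch of how one might experiment (the table and column names below are placeholders, and the value should be tuned to your own instance):

-- Try a value around 2x the vCPU count as a starting point
SET effective_io_concurrency = 4;

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table
WHERE indexed_col BETWEEN 100 AND 200;  -- a predicate that drives a bitmap heap scan
-- Compare "I/O Timings" and total execution time across different settings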

In closing, just as with Oracle and other RDBMSs, there are numerous configuration parameters, all of which can affect the workload. However, the above three parameters are the ones I most often find to have opportunities for optimization, especially on more modern platforms.

In future posts I will detail how each one of these can change a workload.

Enjoy!