By default, the PostgreSQL JDBC driver fetches all rows at once and attempts to load the entire result set into memory, unlike other drivers such as Oracle's, which by default fetches only 10 rows at a time. Both defaults have pros and cons; however, in the context of the types of workloads I see every day, the PostgreSQL default is typically not optimal.
As a frame of reference, the default PostgreSQL fetch size is just fine if your queries always return small result sets. If there is a chance that larger result sets could be retrieved, you will see a high variance in performance between small and large result sets, and an explicit fetch size should be considered.
To demonstrate, I created a demo application that builds a simulated table with 2 million rows and selects them with various fetch sizes:
- fetchSize of 1000 (stream 1000 rows at a time)
- fetchSize of 5000 (stream 5000 rows at a time)
- fetchSize of 0 (fetch all rows at once) – Default
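Applying a fetch size is a one-line change on the Statement, but the PostgreSQL driver only honors it when auto-commit is off and the result set is forward-only. Here is a minimal sketch of that pattern; the connection details and table name are placeholders, and the `normalizeFetchSize` helper is an illustrative addition, not part of the demo application:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingQuerySketch {

    // Illustrative helper: clamp a requested fetch size into a sane range.
    // A value of 0 would tell the PostgreSQL driver to fetch everything at once.
    static int normalizeFetchSize(int requested, int min, int max) {
        return Math.max(min, Math.min(max, requested));
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details -- substitute your own environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "secret")) {
            conn.setAutoCommit(false); // required for the driver to stream in chunks
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(normalizeFetchSize(5000, 100, 10_000));
                try (ResultSet rs = stmt.executeQuery("SELECT id FROM large_data_test")) {
                    while (rs.next()) {
                        // process each row; only one chunk is held in memory at a time
                    }
                }
            }
        }
    }
}
```

With auto-commit left on, the driver silently ignores the fetch size and falls back to loading the whole result set, which is why the demo's `getConnection()` disables it.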
For a query returning 2 million rows, the three fetch sizes produced the following results:
java -cp .:/home/shaneborden_google_com/java/postgresql-42.5.4.jar DatabaseFetchSizeTest
--- Database Setup ---
[MEMORY] Initial Baseline: 6.68 MB Used (Total Heap: 56.00 MB)
Existing table dropped.
New table created: large_data_test
Inserting 2000000 rows... Done in 44.36 seconds.
[MEMORY] After Data Insertion: 6.72 MB Used (Total Heap: 40.00 MB)
------------------------------------------------------------
--- Running Test: Small Chunk (1000 rows) (Fetch Size: 1000) ---
------------------------------------------------------------
Executing query with fetch size 1000...
[MEMORY] 1. Before Query Execution: 6.63 MB Used (Total Heap: 40.00 MB)
[MEMORY] 2. After Query Execution (Data Loaded/Cursor Open): 6.86 MB Used (Total Heap: 40.00 MB)
Test Complete.
Total Rows Read: 2000000
Total Time Taken: 1613 ms
[MEMORY] 3. After All Rows Processed: 6.67 MB Used (Total Heap: 68.00 MB)
Mode: STREAMING. Expect memory usage to remain low and flat across stages 1-3.
------------------------------------------------------------
--- Running Test: Optimal Chunk (5000 rows) (Fetch Size: 5000) ---
------------------------------------------------------------
Executing query with fetch size 5000...
[MEMORY] 1. Before Query Execution: 6.67 MB Used (Total Heap: 40.00 MB)
[MEMORY] 2. After Query Execution (Data Loaded/Cursor Open): 7.76 MB Used (Total Heap: 40.00 MB)
Test Complete.
Total Rows Read: 2000000
Total Time Taken: 1104 ms
[MEMORY] 3. After All Rows Processed: 6.67 MB Used (Total Heap: 56.00 MB)
Mode: STREAMING. Expect memory usage to remain low and flat across stages 1-3.
------------------------------------------------------------
--- Running Test: Default (0 - all at once) (Fetch Size: 0) ---
------------------------------------------------------------
Executing query with fetch size 0...
[MEMORY] 1. Before Query Execution: 6.67 MB Used (Total Heap: 40.00 MB)
[MEMORY] 2. After Query Execution (Data Loaded/Cursor Open): 454.03 MB Used (Total Heap: 780.00 MB)
Test Complete.
Total Rows Read: 2000000
Total Time Taken: 1480 ms
[MEMORY] 3. After All Rows Processed: 454.03 MB Used (Total Heap: 1560.00 MB)
Mode: ALL-AT-ONCE. Expect a large memory spike at stage 2.
- fetchSize of 1000 (stream 1000 rows at a time) – Total Time: 1613ms
- fetchSize of 5000 (stream 5000 rows at a time) – Total Time: 1104ms
- fetchSize of 0 (fetch all rows at once) – Total Time: 1480ms
So in the above example, just lowering the fetch size from unlimited to 5000 yielded a performance improvement of just over 25%!
If you have a mixture of result set sizes, this second test shows the effect of the fetch size setting when smaller result sets are expected. As demonstrated with a 2,000-row test, the variance still exists, but it is not as dramatic, so setting the fetch size to 5000 for most queries would be acceptable:
java -cp .:/home/shaneborden_google_com/java/postgresql-42.5.4.jar DatabaseFetchSizeTest
--- Database Setup ---
[MEMORY] Initial Baseline: 6.68 MB Used (Total Heap: 68.00 MB)
Existing table dropped.
New table created: large_data_test
Inserting 2000 rows... Done in 0.15 seconds.
[MEMORY] After Data Insertion: 6.60 MB Used (Total Heap: 40.00 MB)
------------------------------------------------------------
--- Running Test: Small Chunk (1000 rows) (Fetch Size: 1000) ---
------------------------------------------------------------
Executing query with fetch size 1000...
[MEMORY] 1. Before Query Execution: 6.60 MB Used (Total Heap: 40.00 MB)
[MEMORY] 2. After Query Execution (Data Loaded/Cursor Open): 6.83 MB Used (Total Heap: 40.00 MB)
Test Complete.
Total Rows Read: 2000
Total Time Taken: 41 ms
[MEMORY] 3. After All Rows Processed: 6.63 MB Used (Total Heap: 40.00 MB)
Mode: STREAMING. Expect memory usage to remain low and flat across stages 1-3.
------------------------------------------------------------
--- Running Test: Optimal Chunk (5000 rows) (Fetch Size: 5000) ---
------------------------------------------------------------
Executing query with fetch size 5000...
[MEMORY] 1. Before Query Execution: 6.63 MB Used (Total Heap: 40.00 MB)
[MEMORY] 2. After Query Execution (Data Loaded/Cursor Open): 7.07 MB Used (Total Heap: 40.00 MB)
Test Complete.
Total Rows Read: 2000
Total Time Taken: 28 ms
[MEMORY] 3. After All Rows Processed: 7.07 MB Used (Total Heap: 40.00 MB)
Mode: STREAMING. Expect memory usage to remain low and flat across stages 1-3.
------------------------------------------------------------
--- Running Test: Default (0 - all at once) (Fetch Size: 0) ---
------------------------------------------------------------
Executing query with fetch size 0...
[MEMORY] 1. Before Query Execution: 6.63 MB Used (Total Heap: 40.00 MB)
[MEMORY] 2. After Query Execution (Data Loaded/Cursor Open): 7.07 MB Used (Total Heap: 40.00 MB)
Test Complete.
Total Rows Read: 2000
Total Time Taken: 36 ms
[MEMORY] 3. After All Rows Processed: 7.08 MB Used (Total Heap: 40.00 MB)
Mode: ALL-AT-ONCE. Expect a large memory spike at stage 2.
- fetchSize of 1000 (stream 1000 rows at a time) – Total Time: 41ms
- fetchSize of 5000 (stream 5000 rows at a time) – Total Time: 28ms
- fetchSize of 0 (fetch all rows at once) – Total Time: 36ms
So in the above example, using a fetch size of 5000 even for smaller result sets still yielded a 22% improvement!
One thing to keep in mind is that, depending on the width of your rows, the fetch size may need to be tuned yet again. A network packet is only so large, and for very wide rows a smaller fetch size may be more beneficial.
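One way to reason about that trade-off is to size each fetch so that a round trip carries a roughly constant number of bytes rather than a constant number of rows. The heuristic below is purely illustrative (it is not a driver feature, and the bounds are assumptions):

```java
public class FetchSizeHeuristic {

    /**
     * Illustrative heuristic: pick a fetch size so that one round trip
     * carries roughly targetBufferBytes of row data. Wider rows therefore
     * get a smaller fetch size automatically.
     */
    static int suggestFetchSize(int avgRowBytes, int targetBufferBytes) {
        if (avgRowBytes <= 0) {
            throw new IllegalArgumentException("avgRowBytes must be positive");
        }
        int size = targetBufferBytes / avgRowBytes;
        // Keep the result in a practical range: tiny chunks add round trips,
        // while huge chunks approach the all-at-once behavior.
        return Math.max(100, Math.min(10_000, size));
    }

    public static void main(String[] args) {
        // Narrow ~100-byte rows with a ~512 KB target buffer
        System.out.println(suggestFetchSize(100, 512 * 1024));
        // Very wide ~1 MB rows collapse to the minimum chunk size
        System.out.println(suggestFetchSize(1_000_000, 512 * 1024));
    }
}
```

The exact buffer target is workload-dependent; the point is simply that fetch size and row width should be tuned together, not independently.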
Try changing the fetchSize in your own application!
Code Appendix
DatabaseFetchSizeTest.java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;
/**
* A Java application to demonstrate the effect of the JDBC fetchSize parameter
* on query performance when retrieving a large result set from PostgreSQL.
*
* The PostgreSQL JDBC driver, by default, fetches all rows at once. By setting
* a non-zero fetchSize, we enable cursor mode on the server, allowing the driver
* to retrieve results in chunks, which is crucial for handling large datasets
* without running out of memory on the client side.
*/
public class DatabaseFetchSizeTest {
// --- CONFIGURATION ---
private static final String DB_URL = "jdbc:postgresql://10.3.1.17:5432/postgres"; // Change 'your_database'
private static final String DB_USER = "postgres"; // Change this
private static final String DB_PASSWORD = "Google54321"; // Change this
private static final String TABLE_NAME = "large_data_test";
private static final int NUM_ROWS_TO_INSERT = 2000;
private static final int SMALL_FETCH_SIZE = 1000; // Small size for initial run
private static final int OPTIMAL_FETCH_SIZE = 5000; // A reasonable chunk size
private static final int DEFAULT_FETCH_SIZE = 0; // PostgreSQL default (fetches all rows at once)
public static void main(String[] args) {
// Ensure the PostgreSQL driver is loaded
try {
Class.forName("org.postgresql.Driver");
} catch (ClassNotFoundException e) {
System.err.println("PostgreSQL JDBC Driver not found. Please ensure the JAR is in your classpath.");
return;
}
// 1. Setup Environment
try (Connection conn = getConnection()) {
setupDatabase(conn);
// 2. Run Tests
// Running the 'default' last usually gives the best contrast in memory usage.
runTest(conn, SMALL_FETCH_SIZE, "Small Chunk (1000 rows)");
runTest(conn, OPTIMAL_FETCH_SIZE, "Optimal Chunk (5000 rows)");
runTest(conn, DEFAULT_FETCH_SIZE, "Default (0 - all at once)");
} catch (SQLException e) {
System.err.println("Database error occurred: " + e.getMessage());
e.printStackTrace();
}
}
/**
* Establishes a connection and sets auto-commit to false to allow streaming.
*/
private static Connection getConnection() throws SQLException {
Properties props = new Properties();
props.setProperty("user", DB_USER);
props.setProperty("password", DB_PASSWORD);
// Important: Set the auto-commit to false to enable server-side cursors (streaming)
Connection conn = DriverManager.getConnection(DB_URL, props);
conn.setAutoCommit(false);
return conn;
}
/**
* Creates the test table and populates it with a large number of rows.
*/
private static void setupDatabase(Connection conn) throws SQLException {
System.out.println("--- Database Setup ---");
displayMemory("Initial Baseline");
try (Statement stmt = conn.createStatement()) {
// Drop table if it exists
stmt.executeUpdate("DROP TABLE IF EXISTS " + TABLE_NAME);
System.out.println("Existing table dropped.");
// Create new table
stmt.executeUpdate("CREATE TABLE " + TABLE_NAME + " (id SERIAL PRIMARY KEY, data_value TEXT, timestamp TIMESTAMP)");
System.out.println("New table created: " + TABLE_NAME);
}
// Insert a large number of rows
String insertSQL = "INSERT INTO " + TABLE_NAME + " (data_value, timestamp) VALUES (?, NOW())";
try (PreparedStatement pstmt = conn.prepareStatement(insertSQL)) {
System.out.print("Inserting " + NUM_ROWS_TO_INSERT + " rows...");
long startTime = System.currentTimeMillis();
for (int i = 1; i <= NUM_ROWS_TO_INSERT; i++) {
// Each row has a string of decent length
pstmt.setString(1, "Data record number " + i + " - Padding to increase row size for better memory test.");
pstmt.addBatch();
if (i % 10000 == 0) { // Execute batch every 10,000 rows
pstmt.executeBatch();
conn.commit();
}
}
pstmt.executeBatch();
conn.commit();
long endTime = System.currentTimeMillis();
System.out.printf(" Done in %.2f seconds.\n", (endTime - startTime) / 1000.0);
}
displayMemory("After Data Insertion");
}
/**
* Helper method to calculate and display current used heap memory.
* @param stage A string label describing the point of measurement.
*/
private static void displayMemory(String stage) {
// Request garbage collection hint to get a cleaner reading
System.gc();
long totalMemory = Runtime.getRuntime().totalMemory();
long freeMemory = Runtime.getRuntime().freeMemory();
long usedMemory = totalMemory - freeMemory;
double usedMB = usedMemory / (1024.0 * 1024.0);
double totalMB = totalMemory / (1024.0 * 1024.0);
System.out.printf(" [MEMORY] %s: %.2f MB Used (Total Heap: %.2f MB)\n",
stage, usedMB, totalMB);
}
/**
* Runs the query test with a specified fetch size and measures performance and memory.
* @param conn The database connection.
* @param fetchSize The JDBC fetch size to use.
* @param label A description for the test.
*/
private static void runTest(Connection conn, int fetchSize, String label) {
System.out.println("\n------------------------------------------------------------");
System.out.println("--- Running Test: " + label + " (Fetch Size: " + fetchSize + ") ---");
System.out.println("------------------------------------------------------------");
String querySQL = "SELECT id, data_value, timestamp FROM " + TABLE_NAME;
long rowsRead = 0;
try (Statement stmt = conn.createStatement()) {
// Set the crucial parameter
stmt.setFetchSize(fetchSize);
System.out.println("Executing query with fetch size " + fetchSize + "...");
displayMemory("1. Before Query Execution");
long startTime = System.currentTimeMillis();
try (ResultSet rs = stmt.executeQuery(querySQL)) {
// CRITICAL MEMORY MEASUREMENT POINT:
// For fetchSize=0, the entire result set is in memory here.
displayMemory("2. After Query Execution (Data Loaded/Cursor Open)");
// Process the result set (simulating application logic)
while (rs.next()) {
rowsRead++;
// Read data to ensure all data is pulled from the driver/server
rs.getString("data_value");
}
long endTime = System.currentTimeMillis();
long duration = endTime - startTime;
System.out.println("Test Complete.");
System.out.println(" Total Rows Read: " + rowsRead);
System.out.printf(" Total Time Taken: %d ms\n", duration);
displayMemory("3. After All Rows Processed");
// Analyze the difference
if (fetchSize == 0) {
System.out.println(" Mode: ALL-AT-ONCE. Expect a large memory spike at stage 2.");
} else {
System.out.println(" Mode: STREAMING. Expect memory usage to remain low and flat across stages 1-3.");
}
}
} catch (SQLException e) {
System.err.println("Error during test run: " + e.getMessage());
System.err.println("HINT: Ensure autoCommit is set to FALSE on the connection for fetchSize > 0 to work.");
}
}
}
Compile and Execute
PostgreSQL JDBC Fetch Size and Memory Test
This application is designed to demonstrate how the JDBC fetchSize parameter affects not only query time but also the client-side heap memory usage. It measures memory at critical stages: before the query, immediately after execution (when data is fetched), and after all rows have been processed.
1. Prerequisites & Database Configuration
To run this test, you will need the following:
- Java Development Kit (JDK): Version 8 or newer.
- PostgreSQL Database: A running instance that you can connect to.
- PostgreSQL JDBC Driver: The official .jar file for the driver (e.g., postgresql-42.x.x.jar), which you can download from the PostgreSQL website.
Before running, you must update the connection details in DatabaseFetchSizeTest.java to match your environment.
2. Running the Application
When running the application, it’s beneficial to give the Java Virtual Machine (JVM) a specific amount of memory. This makes the memory spike during the default fetch (fetchSize = 0) more noticeable.
You can set the maximum heap size using the -Xmx flag when you run the application from your terminal:
# Example: Setting max heap size to 512MB
java -Xmx512m -cp .:path/to/postgresql-42.x.x.jar DatabaseFetchSizeTest
3. Expected Observations
The most critical comparison is the memory usage reported between Stage 1 (Before Query) and Stage 2 (After Query Execution).
| Fetch Size | Stage 1 to Stage 2 Memory Change | Reason |
|---|---|---|
| 0 (Default) | Large Increase | The driver loads the entire result set into the JVM’s memory immediately at Stage 2. |
| > 0 (Streaming) | Minimal Increase | Only a small chunk of rows (e.g., 5000) is loaded into memory at Stage 2, keeping usage low. |
By running this test, you’ll see concrete proof that a streaming fetch (fetchSize > 0) is essential for maintaining memory stability when processing large volumes of data.

