

# Optimizing query performance in Aurora PostgreSQL
<a name="AuroraPostgreSQL.optimizing.queries"></a>

Optimizing query performance is crucial because it helps databases run faster and more efficiently while using fewer resources, which results in better user experience and lower operating costs. Amazon Aurora PostgreSQL provides several features to help optimize query performance for PostgreSQL workloads.

**Topics**
+ [Improving query performance for Aurora PostgreSQL with Aurora Optimized Reads](AuroraPostgreSQL.optimized.reads.md)
+ [Optimizing correlated subqueries in Aurora PostgreSQL](apg-correlated-subquery.md)
+ [Improving query performance using adaptive join](user-apg-adaptive-join.md)
+ [Using shared plan cache](apg-shared-plan-cache.md)

# Improving query performance for Aurora PostgreSQL with Aurora Optimized Reads
<a name="AuroraPostgreSQL.optimized.reads"></a>

You can achieve faster query processing for Aurora PostgreSQL with Aurora Optimized Reads. An Aurora PostgreSQL DB instance that uses Aurora Optimized Reads delivers up to 8x improved query latency and up to 30% cost savings for applications with large datasets that exceed the memory capacity of a DB instance.

**Topics**
+ [Overview of Aurora Optimized Reads in PostgreSQL](#AuroraPostgreSQL.optimized.reads.overview)
+ [Using Aurora Optimized Reads](#AuroraPostgreSQL.optimized.reads.using)
+ [Use cases for Aurora Optimized Reads](#AuroraPostgreSQL.optimized.reads.usecases)
+ [Monitoring DB instances that use Aurora Optimized Reads](#AuroraPostgreSQL.optimized.reads.monitoring)
+ [Best practices for Aurora Optimized Reads](#AuroraPostgreSQL.optimized.reads.bestpractices)

## Overview of Aurora Optimized Reads in PostgreSQL
<a name="AuroraPostgreSQL.optimized.reads.overview"></a>

Aurora Optimized Reads is available by default when you create a DB cluster that uses Graviton-based R6gd and R8gd or Intel-based R6id instance classes with non-volatile memory express (NVMe) storage. It is available in the following PostgreSQL versions:
+ 14.12 and higher, 15.7 and higher, 16.3 and higher, and 17.4 and higher for R8gd instances
+ 14.9 and higher, 15.4 and higher, and 16.1 and higher for R6gd and R6id instances
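
To confirm that your DB instance runs a version that supports Optimized Reads, you can check the engine version from a session. The `aurora_version()` function is specific to Aurora PostgreSQL:

```
postgres=> SELECT aurora_version(), version();
```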

Aurora Optimized Reads supports two capabilities: tiered cache and temporary objects.

**Optimized Reads-enabled tiered cache** - Using tiered cache, you can extend your DB instance caching capacity by up to 5x the instance memory. Aurora automatically maintains the cache so that it contains the most recent, transactionally consistent data, freeing applications from the overhead of managing data currency in external result-set-based caching solutions. Tiered cache offers up to 8x better latency for queries that previously fetched data from Aurora storage.

In Aurora, the value for `shared_buffers` in the default parameter group is usually set to around 75% of the available memory. However, for the r8gd, r6gd, and r6id instance types, Aurora reduces the `shared_buffers` space by 4.5% to host the metadata for the Optimized Reads cache.
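
You can check the current `shared_buffers` allocation from a session. The value depends on the instance class:

```
postgres=> SHOW shared_buffers;
```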

**Optimized Reads-enabled temporary objects** - Using temporary objects, you can achieve faster query processing by placing the temporary files that are generated by PostgreSQL on the local NVMe storage. This reduces the traffic to Elastic Block Storage (EBS) over the network. It offers up to 2x better latency and throughput for advanced queries that sort, join, or merge large volumes of data that do not fit within the memory capacity available on a DB instance.

On an Aurora I/O-Optimized cluster, Optimized Reads uses both tiered cache and temporary objects on NVMe storage. When the tiered cache capability is enabled, Aurora allocates 2x the instance memory for temporary objects, approximately 10% of the local storage for internal operations, and the remaining storage as tiered cache. On an Aurora Standard cluster, Optimized Reads uses only temporary objects.

Aurora I/O-Optimized clusters allow you to resize the allocated space for Optimized Reads-enabled temporary objects using the dynamic parameter `aurora_temp_space_size` at the instance level. This resizing feature is available in the following PostgreSQL versions:
+ 16.8 and all higher versions
+ 15.12 and higher 15 versions
+ 14.17 and higher 14 versions

With this parameter, you can resize the capacity from 2x up to 6x of the instance memory without requiring a database engine restart. When you expand the temporary object space, the change takes effect immediately, regardless of concurrent workloads. However, when you reduce the space, the adjustment completes only after there is sufficient unused space in temporary objects to accommodate the new size request. After you resize Optimized Reads-enabled temporary objects, the tiered cache automatically adjusts to use any available space.
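
You can view the current allocation from a session. Because `aurora_temp_space_size` is a dynamic instance-level parameter, you change it in the DB instance parameter group rather than with `SET`:

```
postgres=> SHOW aurora_temp_space_size;
```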

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.optimized.reads.html)

**Note**  
A switch between I/O-Optimized and Standard clusters on an NVMe-based DB instance class causes an immediate database engine restart.

In Aurora PostgreSQL, use the `temp_tablespaces` parameter to configure the table space where the temporary objects are stored.

To check whether the temporary objects are configured, use the following command:

```
postgres=> show temp_tablespaces;
temp_tablespaces
---------------------
aurora_temp_tablespace
(1 row)
```

The `aurora_temp_tablespace` is a tablespace configured by Aurora that points to the NVMe local storage. You can't modify this parameter or switch back to Amazon EBS storage.

To check whether the Optimized Reads cache is turned on, use the following command:

```
postgres=> show shared_preload_libraries;
                 shared_preload_libraries
--------------------------------------------------------
rdsutils,pg_stat_statements,aurora_optimized_reads_cache
```

## Using Aurora Optimized Reads
<a name="AuroraPostgreSQL.optimized.reads.using"></a>

When you provision an Aurora PostgreSQL DB instance with one of the NVMe-based DB instances, the DB instance automatically uses Aurora Optimized Reads.

To turn on Aurora Optimized Reads, do one of the following:
+ Create an Aurora PostgreSQL DB cluster using one of the NVMe-based DB instance classes. For more information, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).
+ Modify an existing Aurora PostgreSQL DB cluster to use one of the NVMe-based DB instance classes. For more information, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

Aurora Optimized Reads is available in all AWS Regions where one or more of the DB instance classes with local NVMe SSD storage are supported. For more information, see [Amazon Aurora DB instance classes](Concepts.DBInstanceClass.md).

To switch back to a non-Optimized Reads Aurora instance, modify the DB instance class of your Aurora instance to a similar instance class without NVMe ephemeral storage. For example, if the current DB instance class is db.r6gd.4xlarge, choose db.r6g.4xlarge to switch back. For more information, see [Modifying an Aurora DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html).
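
As a sketch, the instance class change might look like the following AWS CLI command. The instance identifier is hypothetical, and changing the instance class causes downtime unless you defer it to the next maintenance window:

```
aws rds modify-db-instance \
    --db-instance-identifier my-aurora-instance \
    --db-instance-class db.r6g.4xlarge \
    --apply-immediately
```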

## Use cases for Aurora Optimized Reads
<a name="AuroraPostgreSQL.optimized.reads.usecases"></a>

**Optimized Reads-enabled tiered cache**

The following are some use cases that can benefit from Optimized Reads with tiered cache:
+ Internet scale applications such as payments processing, billing, e-commerce with strict performance SLAs.
+ Real-time reporting dashboards that run hundreds of point queries for metrics/data collection.
+ Generative AI applications with the pgvector extension to search exact or nearest neighbors across millions of vector embeddings.

**Optimized Reads-enabled temporary objects**

The following are some use cases that can benefit from Optimized Reads with temporary objects:
+ Analytical queries that include Common Table Expressions (CTEs), derived tables, and grouping operations.
+ Read replicas that handle the unoptimized queries for an application.
+ On-demand or dynamic reporting queries with complex operations such as GROUP BY and ORDER BY that can't always use appropriate indexes.
+ `CREATE INDEX` or `REINDEX` operations for sorting.
+ Other workloads that use internal temporary tables.
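
To check whether a query relies on temporary objects, you can run EXPLAIN with the ANALYZE and BUFFERS options. The table in the following sketch is hypothetical:

```
postgres=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders ORDER BY customer_id;
```

In the output, a `Sort Method: external merge  Disk:` line and `temp read`/`written` buffer counts indicate that the operation spilled to temporary files, which Optimized Reads places on the local NVMe storage.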

## Monitoring DB instances that use Aurora Optimized Reads
<a name="AuroraPostgreSQL.optimized.reads.monitoring"></a>

You can monitor your queries that use Optimized Reads-enabled tiered cache with the EXPLAIN command as shown in the following example:

```
postgres=> EXPLAIN (ANALYZE, BUFFERS) SELECT c FROM sbtest15 WHERE id=100000000;

QUERY PLAN
--------------------------------------------------------------------------------------
 Index Scan using sbtest15_pkey on sbtest15  (cost=0.57..8.59 rows=1 width=121) (actual time=0.287..0.288 rows=1 loops=1)
   Index Cond: (id = 100000000)
   Buffers: shared hit=3 read=2 aurora_orcache_hit=2
   I/O Timings: shared/local read=0.264
 Planning:
   Buffers: shared hit=33 read=6 aurora_orcache_hit=6
   I/O Timings: shared/local read=0.607
 Planning Time: 0.929 ms
 Execution Time: 0.303 ms
(9 rows)
Time: 2.028 ms
```

**Note**  
The `aurora_orcache_hit` and `aurora_storage_read` fields in the `Buffers` section of the explain plan are shown only when Optimized Reads is turned on and their values are greater than zero. The `read` field is the sum of the `aurora_orcache_hit` and `aurora_storage_read` fields.

You can monitor DB instances that use Aurora Optimized Reads using the following CloudWatch metrics:
+ `AuroraOptimizedReadsCacheHitRatio`
+ `FreeEphemeralStorage`
+ `ReadIOPSEphemeralStorage`
+ `ReadLatencyEphemeralStorage`
+ `ReadThroughputEphemeralStorage`
+ `WriteIOPSEphemeralStorage`
+ `WriteLatencyEphemeralStorage`
+ `WriteThroughputEphemeralStorage`

These metrics provide data about available instance store storage, IOPS, and throughput. For more information about these metrics, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).

You can also use the `pg_proctab` extension to monitor NVMe storage. 

```
postgres=> SELECT * FROM pg_diskusage();

major | minor |       devname       | reads_completed | reads_merged | sectors_read | readtime | writes_completed | writes_merged | sectors_written | writetime | current_io | iotime  | totaliotime
------+-------+---------------------+-----------------+--------------+--------------+----------+------------------+---------------+-----------------+-----------+------------+---------+-------------
      |       | rdstemp             |           23264 |            0 |       191450 |    11670 |          1750892 |             0 |        24540576 |    819350 |          0 | 3847580 |      831020
      |       | rdsephemeralstorage |           23271 |            0 |       193098 |     2620 |           114961 |             0 |        13845120 |    130770 |          0 |  215010 |      133410
(2 rows)
```

## Best practices for Aurora Optimized Reads
<a name="AuroraPostgreSQL.optimized.reads.bestpractices"></a>

Use the following best practices for Aurora Optimized Reads:
+ Monitor the storage space available on the instance store with the CloudWatch metric `FreeEphemeralStorage`. If the instance store is reaching its limit because of the workload on the DB instance, tune the concurrency of queries that heavily use temporary objects, or modify the DB instance to use a larger DB instance class.
+ Monitor the CloudWatch metric for the Optimized Reads cache hit rate. Operations like VACUUM modify large numbers of blocks very quickly, which can cause a temporary drop in the hit ratio. You can use the `pg_prewarm` extension to load data into the buffer cache, which allows Aurora to proactively write some of those blocks to the Optimized Reads cache.
+ You can enable cluster cache management (CCM) to warm up the buffer cache and tiered cache on a tier-0 reader, which can then be used as a failover target. When CCM is enabled, Aurora periodically scans the buffer cache to write pages eligible for eviction to the tiered cache. For more information on CCM, see [Fast recovery after failover with cluster cache management for Aurora PostgreSQL](AuroraPostgreSQL.cluster-cache-mgmt.md).
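
As a sketch of warming the cache with `pg_prewarm` (the table name is hypothetical):

```
postgres=> CREATE EXTENSION IF NOT EXISTS pg_prewarm;
CREATE EXTENSION
postgres=> SELECT pg_prewarm('sbtest15');
```

The function returns the number of blocks loaded into the buffer cache.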

# Optimizing correlated subqueries in Aurora PostgreSQL
<a name="apg-correlated-subquery"></a>

A correlated subquery references table columns from the outer query. It is evaluated once for every row returned by the outer query. In the following example, the subquery references a column from table `ot`. This table is not included in the subquery's FROM clause, but it is referenced in the outer query's FROM clause. If table `ot` has 1 million rows, the subquery needs to be evaluated 1 million times.

```
SELECT ot.a, ot.b FROM ot WHERE ot.b < (SELECT AVG(it.b) FROM it WHERE it.a = ot.a);
```

**Note**  
Subquery transformation and subquery cache are available in Aurora PostgreSQL beginning with version 16.8. Babelfish for Aurora PostgreSQL supports these features beginning with version 4.2.0.
Starting with Babelfish for Aurora PostgreSQL versions 4.6.0 and 5.2.0, the following parameters control these features:  
`babelfishpg_tsql.apg_enable_correlated_scalar_transform`  
`babelfishpg_tsql.apg_enable_subquery_cache`  
By default, both parameters are turned on.

## Improving Aurora PostgreSQL query performance using subquery transformation
<a name="apg-corsubquery-transformation"></a>

Aurora PostgreSQL can accelerate correlated subqueries by transforming them into equivalent outer joins. This optimization applies to the following two types of correlated subqueries:
+ Subqueries that return a single aggregate value and appear in the SELECT list.

  ```
  SELECT ot.a, ot.b, (SELECT AVG(it.b) FROM it WHERE it.a = ot.a) FROM ot;
  ```
+ Subqueries that return a single aggregate value and appear in a WHERE clause.

  ```
  SELECT ot.a, ot.b FROM ot WHERE ot.b < (SELECT AVG(it.b) FROM it WHERE it.a = ot.a);
  ```

### Enabling transformation in the subquery
<a name="apg-corsub-transform"></a>

 To enable the transformation of correlated subqueries into equivalent outer joins, set the `apg_enable_correlated_scalar_transform` parameter to `ON`. The default value of this parameter is `OFF`. 

You can modify the cluster or instance parameter group to set the parameters. To learn more, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

Alternatively, you can configure the setting for just the current session by the following command:

```
SET apg_enable_correlated_scalar_transform TO ON;
```

### Verifying the transformation
<a name="apg-corsub-transform-confirm"></a>

Use the EXPLAIN command to verify whether the correlated subquery has been transformed into an outer join in the query plan.

When the transformation is enabled, the applicable correlated subquery is transformed into an outer join. For example:

```
postgres=> CREATE TABLE ot (a INT, b INT);
CREATE TABLE
postgres=> CREATE TABLE it (a INT, b INT);
CREATE TABLE

postgres=> SET apg_enable_correlated_scalar_transform TO ON;
SET
postgres=> EXPLAIN (COSTS FALSE) SELECT ot.a, ot.b FROM ot WHERE ot.b < (SELECT AVG(it.b) FROM it WHERE it.a = ot.a);

                         QUERY PLAN
--------------------------------------------------------------
 Hash Join
   Hash Cond: (ot.a = apg_scalar_subquery.scalar_output)
   Join Filter: ((ot.b)::numeric < apg_scalar_subquery.avg)
   ->  Seq Scan on ot
   ->  Hash
         ->  Subquery Scan on apg_scalar_subquery
               ->  HashAggregate
                     Group Key: it.a
                     ->  Seq Scan on it
```

The same query is not transformed when the GUC parameter is turned `OFF`. Instead of an outer join, the plan contains a subplan.

```
postgres=> SET apg_enable_correlated_scalar_transform TO OFF;
SET
postgres=> EXPLAIN (COSTS FALSE) SELECT ot.a, ot.b FROM ot WHERE ot.b < (SELECT AVG(it.b) FROM it WHERE it.a = ot.a);
                QUERY PLAN
----------------------------------------
 Seq Scan on ot
   Filter: ((b)::numeric < (SubPlan 1))
   SubPlan 1
     ->  Aggregate
           ->  Seq Scan on it
                 Filter: (a = ot.a)
```

### Limitations
<a name="apg-corsub-transform-limitations"></a>
+ The subquery must be in the SELECT list or in one of the conditions in the WHERE clause. Otherwise, it isn't transformed.
+ The subquery must return an aggregate function. User-defined aggregate functions aren't supported for transformation.
+ A subquery whose return expression isn't a simple aggregate function isn't transformed.
+ The correlated condition in the subquery WHERE clause must be a simple column reference. Otherwise, the subquery isn't transformed.
+ The correlated condition in the subquery WHERE clause must be a plain equality predicate.
+ The subquery can't contain a HAVING or GROUP BY clause.
+ The WHERE clause in the subquery can contain one or more predicates combined with AND.

**Note**  
The performance impact of transformation varies depending on your schema, data, and workload. Correlated subquery execution with transformation can significantly improve performance as the number of rows produced by the outer query increases. We strongly recommend that you test this feature in a non-production environment with your actual schema, data, and workload before enabling it in a production environment.

## Using subquery cache to improve Aurora PostgreSQL query performance
<a name="apg-subquery-cache"></a>

 Aurora PostgreSQL supports subquery cache to store the results of correlated subqueries. This feature skips repeated correlated subquery executions when subquery results are already in the cache. 

### Understanding subquery cache
<a name="apg-subquery-cache-understand"></a>

PostgreSQL's Memoize node is the key part of subquery cache. The Memoize node maintains a hash table in local cache that maps input parameter values to query result rows. The memory limit for the hash table is the product of `work_mem` and `hash_mem_multiplier`. To learn more, see [Resource Consumption](https://www.postgresql.org/docs/16/runtime-config-resource.html).

 During query execution, subquery cache uses Cache Hit Rate (CHR) to estimate whether the cache is improving query performance and to decide at query runtime whether to continue using the cache. CHR is the ratio of the number of cache hits to the total number of requests. For example, if a correlated subquery needs to be executed 100 times, and 70 of those execution results can be retrieved from the cache, the CHR is 0.7.

For every `apg_subquery_cache_check_interval` number of cache misses, the benefit of subquery cache is evaluated by checking whether the CHR is larger than `apg_subquery_cache_hit_rate_threshold`. If not, the cache is deleted from memory, and query execution returns to the original, uncached subquery re-execution.

### Parameters that control subquery cache behavior
<a name="apg-subquery-cache-parameters"></a>

The following table lists the parameters that control the behavior of the subquery cache.


| Parameter | Description | Default | Allowed | 
| --- | --- | --- | --- | 
| `apg_enable_subquery_cache` | Enables the use of cache for correlated scalar subqueries. | OFF | ON, OFF | 
| `apg_subquery_cache_check_interval` | Sets the frequency, in number of cache misses, at which to evaluate the subquery cache hit rate. | 500 | 0–2147483647 | 
| `apg_subquery_cache_hit_rate_threshold` | Sets the threshold for the subquery cache hit rate. | 0.3 | 0.0–1.0 | 

**Note**  
Larger values of `apg_subquery_cache_check_interval` may improve the accuracy of the CHR-based cache benefit estimation, but they increase cache overhead, because the CHR isn't evaluated until the cache table has `apg_subquery_cache_check_interval` rows.  
Larger values of `apg_subquery_cache_hit_rate_threshold` bias toward abandoning the subquery cache and returning to the original, uncached subquery re-execution.

You can modify the cluster or instance parameter group to set the parameters. To learn more, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).

Alternatively, you can configure the setting for just the current session by the following command:

```
SET apg_enable_subquery_cache TO ON;
```

### Turning on subquery cache in Aurora PostgreSQL
<a name="apg-subquery-cache-turningon"></a>

When subquery cache is enabled, Aurora PostgreSQL applies cache to save subquery results. The query plan will then have a Memoize node under SubPlan. 

 For example, the following command sequence shows the estimated query execution plan of a simple correlated subquery without subquery cache. 

```
postgres=> SET apg_enable_subquery_cache TO OFF;
SET
postgres=> EXPLAIN (COSTS FALSE) SELECT ot.a, ot.b FROM ot WHERE ot.b < (SELECT it.b FROM it WHERE it.a = ot.a);

             QUERY PLAN
------------------------------------
 Seq Scan on ot
   Filter: (b < (SubPlan 1))
   SubPlan 1
     ->  Seq Scan on it
           Filter: (a = ot.a)
```

After turning on `apg_enable_subquery_cache`, the query plan will contain a Memoize node under the SubPlan node, indicating that the subquery is planning to use cache.

```
postgres=> SET apg_enable_subquery_cache TO ON;
SET
postgres=> EXPLAIN (COSTS FALSE) SELECT ot.a, ot.b FROM ot WHERE ot.b < (SELECT it.b FROM it WHERE it.a = ot.a);

             QUERY PLAN
------------------------------------
 Seq Scan on ot
   Filter: (b < (SubPlan 1))
   SubPlan 1
     ->  Memoize
           Cache Key: ot.a
           Cache Mode: binary
           ->  Seq Scan on it
                 Filter: (a = ot.a)
```

 The actual query execution plan contains more details of the subquery cache, including cache hits and cache misses. The following output shows the actual query execution plan of the above example query after inserting some values to the tables. 

```
postgres=> EXPLAIN (COSTS FALSE, TIMING FALSE, ANALYZE TRUE) SELECT ot.a, ot.b FROM ot WHERE ot.b < (SELECT it.b FROM it WHERE it.a = ot.a);
            QUERY PLAN
-----------------------------------------------------------------------------
 Seq Scan on ot (actual rows=2 loops=1)
   Filter: (b < (SubPlan 1))
   Rows Removed by Filter: 8
   SubPlan 1
     ->  Memoize (actual rows=0 loops=10)
           Cache Key: ot.a
           Cache Mode: binary
           Hits: 4  Misses: 6  Evictions: 0  Overflows: 0  Memory Usage: 1kB
           ->  Seq Scan on it (actual rows=0 loops=6)
                 Filter: (a = ot.a)
                 Rows Removed by Filter: 4
```

The total cache hit number is 4, and the total cache miss number is 6. If the total number of hits and misses is less than the number of loops in the Memoize node, the CHR evaluation did not pass and the cache was cleaned up and abandoned at some point. Subquery execution then returned to the original uncached re-execution.

### Limitations
<a name="apg-subquery-cache-limitations"></a>

Subquery cache does not support certain patterns of correlated subqueries. These types of queries run without cache, even if subquery cache is turned on:
+ IN/EXISTS/ANY/ALL correlated subqueries
+ Correlated subqueries containing nondeterministic functions
+ Correlated subqueries that reference outer table columns with data types that don't support hashing or equality operations

# Improving query performance using adaptive join
<a name="user-apg-adaptive-join"></a>

## Overview
<a name="user-apg-adaptive-join-intro"></a>

Adaptive join is a preview feature in Aurora PostgreSQL 17.4 that helps improve query performance. This feature is disabled by default, but you can enable it using Grand Unified Configuration (GUC) parameters. Because this is a preview feature, the default parameter values might change. When enabled, adaptive join optimizes query performance by dynamically switching from a nested loop join to a hash join at runtime. This switch occurs when the PostgreSQL optimizer has incorrectly chosen a nested loop join because of inaccurate cardinality estimates.

## Configuring adaptive join
<a name="user-apg-adaptive-join-config"></a>

You can control adaptive join using these three GUC parameters:


**Adaptive join configuration parameters**  

| GUC parameter | Description | Default and configuration options | 
| --- | --- | --- | 
| `apg_adaptive_join_crossover_multiplier` | This multiplier works with the row crossover point to determine when to switch from a nested loop to a hash join. The row crossover point is where the SQL optimizer estimates that nested loop and hash join operations have equal cost. A higher multiplier value reduces the likelihood of adaptive join switching to a hash join. |  Controls whether adaptive join is enabled [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/user-apg-adaptive-join.html)  | 
| `apg_adaptive_join_cost_threshold` | This parameter sets a minimum query cost threshold. Adaptive join automatically disables itself for queries below this threshold. This prevents performance overhead in simple queries, where the cost of planning an adaptive join could exceed the benefits of switching from nested loop to hash join. |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/user-apg-adaptive-join.html)  | 
| `apg_enable_parameterized_adaptive_join` | This parameter extends adaptive join functionality to parameterized nested loop joins when enabled. By default, adaptive join works only with unparameterized nested loop joins, because these are more likely to benefit from switching to hash join. Parameterized nested loop joins typically perform better, making the switch to hash join less critical. |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/user-apg-adaptive-join.html) Requires `apg_adaptive_join_crossover_multiplier` to be enabled first  | 
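
As a sketch, you might experiment with adaptive join at the session level as follows. The values shown are illustrative, not recommendations; check the AWS documentation for the defaults and allowed ranges before changing them:

```
SET apg_adaptive_join_crossover_multiplier TO 2.0;
SET apg_adaptive_join_cost_threshold TO 10000;
SET apg_enable_parameterized_adaptive_join TO ON;
```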

# Using shared plan cache
<a name="apg-shared-plan-cache"></a>

## Overview
<a name="apg-shared-plan-cache-overview"></a>

Aurora PostgreSQL uses a process-per-user model where each client connection creates a dedicated backend process. Each backend process maintains its own local plan cache for prepared statements. Because these caches can't be shared between processes, applications that use many prepared statements might create duplicate caches across different backend processes, which leads to increased memory usage.

Aurora PostgreSQL versions 17.6 and later and 16.10 and later introduce shared plan cache functionality. When you enable this feature, backend processes can share generic plans, which reduces memory usage and improves performance by eliminating duplicate plan generation.

The shared plan cache uses the following components as its cache key:
+ Query string (including comments)
+ Planner-related GUC parameters (including `search_path`)
+ User ID
+ Database ID

Instance restarts reset the shared cache.

## Parameters
<a name="apg-shared-plan-cache-parameters"></a>

The following table describes the parameters that control the shared plan cache feature:


| Parameter | Description | Default | Allowed | 
| --- | --- | --- | --- | 
| `apg_shared_plan_cache.enable` | Turns shared plan cache on or off | 0 (OFF) | 0, 1 | 
| `apg_shared_plan_cache.max` | The maximum number of cache entries | 200–1000 (instance size dependent) | 100–50000 | 
| `apg_shared_plan_cache.min_size_per_entry` | The minimum plan size to store in shared cache. Smaller plans use local cache to optimize OLTP performance. | 16 KB | 0–32768 (KB) | 
| `apg_shared_plan_cache.max_size_per_entry` | The maximum plan size for shared cache. Larger plans store only cost information. | 256 KB–4 MB (instance size dependent) | 0–32768 (KB) | 
| `apg_shared_plan_cache.idle_generic_plan_release_timeout` | The time after which idle sessions release local generic plans. Lower values save memory; higher values might improve performance. | 10 seconds | 0–2147483647 (ms) | 

**Note**  
You can modify all parameters without restarting.

## Monitoring views and functions
<a name="apg-shared-plan-cache-monitoring"></a>
+ `apg_shared_plan_cache()` – Shows detailed cache entry information (hits, validity, timestamps)
+ `apg_shared_plan_cache_stat()` – Displays instance-level statistics (evictions, invalidations)
+ `apg_shared_plan_cache_reset()` – Removes all entries in `apg_shared_plan_cache()` and `apg_shared_plan_cache_stat()`
+ `apg_shared_plan_cache_remove(cache_key)` – Removes an entry from `apg_shared_plan_cache()` where the entry matches the `cache_key`
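
For example, you can inspect cache entries and instance-level statistics from a session. The column layout of the output depends on your engine version:

```
postgres=> SELECT * FROM apg_shared_plan_cache();
postgres=> SELECT * FROM apg_shared_plan_cache_stat();
```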

## Limitations
<a name="apg-shared-plan-cache-limitations"></a>
+ Only works with prepared statements and doesn't cache PL/pgSQL statements
+ Doesn't cache a query that contains temporary tables or catalog tables
+ Doesn't cache a query that depends on RLS (Row-Level Security)
+ Each replica maintains its own cache (no cross-replica sharing)