

# SQL feature compatibility in Aurora DSQL
<a name="working-with-postgresql-compatibility"></a>

In the following sections, learn about Aurora DSQL support for PostgreSQL data types and SQL commands.

**Topics**
+ [Supported data types in Aurora DSQL](working-with-postgresql-compatibility-supported-data-types.md)
+ [Supported SQL for Aurora DSQL](working-with-postgresql-compatibility-supported-sql-features.md)
+ [Supported subsets of SQL commands in Aurora DSQL](working-with-postgresql-compatibility-supported-sql-subsets.md)
+ [Migrating from PostgreSQL to Aurora DSQL](working-with-postgresql-compatibility-migration-guide.md)

# Supported data types in Aurora DSQL
<a name="working-with-postgresql-compatibility-supported-data-types"></a>

Aurora DSQL supports a subset of the common PostgreSQL types.

**Topics**
+ [Numeric data types](#numeric-data-types)
+ [Character data types](#character-data-types)
+ [Date and time data types](#date-time-data-types)
+ [Miscellaneous data types](#miscellaneous-data-types)
+ [Query runtime data types](#working-with-postgresql-compatibility-query-runtime)

## Numeric data types
<a name="numeric-data-types"></a>

Aurora DSQL supports the following PostgreSQL numeric data types.


| Name | Aliases | Range and precision | Storage size | Index support | 
| --- | --- | --- | --- | --- | 
|  `smallint`  |  `int2`  |  -32768 to +32767  |  2 bytes  | Yes | 
|  `integer`  |  `int`, `int4`  |  -2147483648 to +2147483647  |  4 bytes  | Yes | 
|  `bigint`  |  `int8`  |  -9223372036854775808 to +9223372036854775807  |  8 bytes  | Yes | 
|  `real`  |  `float4`  |  6 decimal digits precision  |  4 bytes  | Yes | 
|  `double precision`  |  `float8`  |  15 decimal digits precision  |  8 bytes  | Yes | 
|  `numeric` [ `(`*p*, *s*`)` ]  |  `decimal` [ `(`*p*, *s*`)` ], `dec` [ `(`*p*, *s*`)` ]  |  Exact numeric of selectable precision. The maximum precision is 38 and the maximum scale is 37.¹ The default is `numeric(18,6)`.  |  8 bytes plus 1/2 byte per precision digit. Maximum size is 27 bytes.  | Yes | 

¹ – If you don't explicitly specify a size when you run `CREATE TABLE` or `ALTER TABLE ADD COLUMN`, then Aurora DSQL enforces the defaults. Aurora DSQL applies limits when you run `INSERT` or `UPDATE` statements.
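
To illustrate the defaults, the following sketch (the table and values are hypothetical) relies on the implicit `numeric(18,6)` precision and scale:

```
CREATE TABLE prices (
    item_id bigint PRIMARY KEY,
    amount numeric              -- no explicit size, so numeric(18,6) applies
);

-- Accepted: 12 integer digits plus 6 fractional digits fits precision 18
INSERT INTO prices VALUES (1, 123456789012.123456);

-- Rejected at INSERT time: the value exceeds precision 18
INSERT INTO prices VALUES (2, 12345678901234.567890);
```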

## Character data types
<a name="character-data-types"></a>

Aurora DSQL supports the following PostgreSQL character data types.


| Name | Aliases | Description | Aurora DSQL limit | Storage size | Index support | 
| --- | --- | --- | --- | --- | --- | 
|  `character` [ `(`*n*`)` ]  |  `char` [ `(`*n*`)` ]  |  Fixed-length character string  |  4096 bytes¹  |  Variable up to 4100 bytes  | Yes | 
|  `character varying` [ `(`*n*`)` ]  |  `varchar` [ `(`*n*`)` ]  |  Variable-length character string  |  65535 bytes¹  |  Variable up to 65539 bytes  | Yes | 
|  `bpchar` [ `(`*n*`)` ]  |    |  If fixed length, this is an alias for `char`. If variable length, this is an alias for `varchar`, where trailing spaces are semantically insignificant.  |  4096 bytes¹  |  Variable up to 4100 bytes  | Yes | 
|  `text`  |    |  Variable-length character string  |  1 MiB¹  |  Variable up to 1 MiB  | Yes | 

¹ – If you don't explicitly specify a size when you run `CREATE TABLE` or `ALTER TABLE ADD COLUMN`, then Aurora DSQL enforces the defaults. Aurora DSQL applies limits when you run `INSERT` or `UPDATE` statements.
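
As a sketch of where these limits apply (the table and values are hypothetical), an explicit length is enforced for `varchar(n)`, while an unsized `text` column is subject to the 1 MiB default:

```
CREATE TABLE profiles (
    profile_id uuid PRIMARY KEY,
    handle varchar(32),    -- explicit length limit of 32 characters
    bio text               -- subject to the default 1 MiB limit
);

-- The limits are checked when the row is written
INSERT INTO profiles
VALUES ('4b36afb0-0000-0000-0000-000000000000', 'sample_handle', 'short bio');
```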

## Date and time data types
<a name="date-time-data-types"></a>

Aurora DSQL supports the following PostgreSQL date and time data types.


| Name | Aliases | Description | Range | Resolution | Storage size | Index support | 
| --- | --- | --- | --- | --- | --- | --- | 
|  `date`  |    |  Calendar date (year, month, day)  |  4713 BC – 5874897 AD  | 1 day |  4 bytes  | Yes | 
|  `time` [ `(`*p*`)` ] [ `without time zone` ]  |    |  Time of day, with no time zone  | 00:00:00 – 24:00:00 | 1 microsecond |  8 bytes  | Yes | 
|  `time` [ `(`*p*`)` ] `with time zone`  |  `timetz`  |  Time of day, including time zone  |  00:00:00+1559 – 24:00:00-1559  | 1 microsecond |  12 bytes  | No | 
|  `timestamp` [ `(`*p*`)` ] [ `without time zone` ]  |    |  Date and time, with no time zone  | 4713 BC – 294276 AD | 1 microsecond |  8 bytes  | Yes | 
|  `timestamp` [ `(`*p*`)` ] `with time zone`  |  `timestamptz`  |  Date and time, including time zone  | 4713 BC – 294276 AD | 1 microsecond |  8 bytes  | Yes | 
|  `interval` [ `fields` ] [ `(`*p*`)` ]  |    |  Time span  | -178000000 years – 178000000 years | 1 microsecond |  16 bytes  | No | 

## Miscellaneous data types
<a name="miscellaneous-data-types"></a>

Aurora DSQL supports the following miscellaneous PostgreSQL data types.


| Name | Aliases | Description | Aurora DSQL limit | Storage size | Index support | 
| --- | --- | --- | --- | --- | --- | 
|  `boolean`  |  `bool`  |  Logical Boolean (true/false)  |    |  1 byte  | Yes | 
|  `bytea`  |    |  Binary data ("byte array")  |  1 MiB¹  |  Variable up to 1 MiB  | No | 
|  `UUID`  |    |  Universally unique identifier  |    |  16 bytes  | Yes | 

¹ – If you don't explicitly specify a size when you run `CREATE TABLE` or `ALTER TABLE ADD COLUMN`, then Aurora DSQL enforces the defaults. Aurora DSQL applies limits when you run `INSERT` or `UPDATE` statements.

## Query runtime data types
<a name="working-with-postgresql-compatibility-query-runtime"></a>

Query runtime data types are internal data types used at query execution time. These types are distinct from the PostgreSQL-compatible types like `varchar` and `integer` that you define in your schema. Instead, these types are runtime representations that Aurora DSQL uses when processing a query.

The following data types are supported only during query runtime:

**Array type**  
Aurora DSQL supports arrays of the supported data types. For example, you can have an array of integers. The function `string_to_array` splits a string into a PostgreSQL-style array with the comma delimiter (`,`) as shown in the following example. You can use arrays in expressions, function outputs, or temporary computations during query execution.  

```
SELECT string_to_array('1,2', ',');
```
The function returns a response similar to the following:  

```
 string_to_array 
-----------------
 {1,2}
(1 row)
```

**`inet` type**  
The `inet` data type represents IPv4 and IPv6 host addresses and, optionally, their subnets. This type is useful when parsing logs, filtering on IP subnets, or doing network calculations within a query. For more information, see [inet in the PostgreSQL documentation](https://www.postgresql.org/docs/16/datatype-net-types.html#DATATYPE-INET).
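
Assuming Aurora DSQL supports the standard PostgreSQL `inet` operators at query runtime, a subnet-membership check can be written entirely in the query, with no `inet` columns in the schema:

```
SELECT '192.168.1.5'::inet << '192.168.1.0/24'::inet AS in_subnet;
```

The `<<` operator returns true when the address on the left is contained within the subnet on the right.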

**JSON runtime functions**  
Aurora DSQL supports JSON and JSONB as runtime data types for query processing. Store JSON data as `text` columns and cast to JSON during query execution to use PostgreSQL JSON functions and operators.  
Aurora DSQL supports most PostgreSQL JSON functions from [Section 9.16, JSON Functions and Operators](https://www.postgresql.org/docs/current/functions-json.html) with identical behavior.  
Functions that return JSON or JSONB types may require additional casting to `text` for proper display.  

```
SELECT json_build_array(1, 2, 'foo', 4, 5)::text;
```
The function returns a response similar to the following:  

```
  json_build_array   
---------------------
 [1, 2, "foo", 4, 5]
(1 row)
```
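
The store-as-`text`-and-cast pattern described above might look like the following, using a hypothetical `events` table whose `payload` column is declared as `text`:

```
SELECT (payload::jsonb)->>'status' AS status
FROM events
WHERE (payload::jsonb)->>'user_id' = '42';
```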

# Supported SQL for Aurora DSQL
<a name="working-with-postgresql-compatibility-supported-sql-features"></a>

Aurora DSQL supports a wide range of core PostgreSQL SQL features. In the following sections, you can learn about general PostgreSQL expression support. This list is not exhaustive.

## `SELECT` command
<a name="dsql-select"></a>

Aurora DSQL supports the following clauses of the `SELECT` command.


| Primary clause | Supported clauses | 
| --- | --- | 
|  `FROM`  |    | 
|  `GROUP BY`  |  `ALL`, `DISTINCT`  | 
|  `ORDER BY`  |  `ASC`, `DESC`, `NULLS`  | 
|  `LIMIT`  |    | 
|  `DISTINCT`  |    | 
|  `HAVING`  |    | 
|  `USING`  |    | 
|  `WITH` (common table expressions)  |    | 
|  `INNER JOIN`  |  `ON`  | 
|  `OUTER JOIN`  |  `LEFT`, `RIGHT`, `FULL`, `ON`  | 
|  `CROSS JOIN`  |  `ON`  | 
|  `UNION`  |  `ALL`  | 
|  `INTERSECT`  |  `ALL`  | 
|  `EXCEPT`  |  `ALL`  | 
|  `OVER`  |  `RANK ()`, `PARTITION BY`  | 
|  `FOR UPDATE`  |    | 
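
Many of these clauses compose in a single statement. The following sketch (table and column names are hypothetical) combines a common table expression, an inner join, grouping, a `HAVING` filter, ordering, and a limit:

```
WITH recent_orders AS (
    SELECT customer_id, total
    FROM orders
    WHERE order_date >= '2024-01-01'
)
SELECT c.name,
       COUNT(*)     AS order_count,
       SUM(r.total) AS revenue
FROM customers c
INNER JOIN recent_orders r ON c.id = r.customer_id
GROUP BY c.name
HAVING SUM(r.total) > 100
ORDER BY revenue DESC
LIMIT 10;
```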

## Data Definition Language (DDL)
<a name="dsql-ddl"></a>

Aurora DSQL supports the following PostgreSQL DDL commands.


| Command | Primary Clause | Supported Clauses | 
| --- | --- | --- | 
|  `CREATE`  |  `TABLE`  |  For information about the supported syntax of the `CREATE TABLE` command, see [`CREATE TABLE`](create-table-syntax-support.md).  | 
|  `ALTER`  |  `TABLE`  |  For information about the supported syntax of the `ALTER TABLE` command, see [`ALTER TABLE`](alter-table-syntax-support.md).  | 
|  `DROP`  |  `TABLE`  |    | 
|  `CREATE`  |  `[UNIQUE] INDEX ASYNC`  |  You can use this command with the following parameters: `ON`, `NULLS FIRST`, `NULLS LAST`. For information about the supported syntax of the `CREATE INDEX ASYNC` command, see [Asynchronous indexes in Aurora DSQL](working-with-create-index-async.md).  | 
|  `DROP`  |  `INDEX`  |    | 
|  `CREATE`  |  `VIEW`  |  For more information about the supported syntax of the `CREATE VIEW` command, see [`CREATE VIEW`](create-view.md).   | 
|  `ALTER`  |  `VIEW`  |  For information about the supported syntax of the `ALTER VIEW` command, see [`ALTER VIEW`](alter-view-syntax-support.md).  | 
|  `DROP`  |  `VIEW`  |  For information about the supported syntax of the `DROP VIEW` command, see [`DROP VIEW`](drop-view-overview.md).  | 
|  `CREATE`  |  `SEQUENCE`  |  For information about the supported syntax of the `CREATE SEQUENCE` command, see [`CREATE SEQUENCE`](create-sequence-syntax-support.md).  | 
|  `ALTER`  |  `SEQUENCE`  |  For information about the supported syntax of the `ALTER SEQUENCE` command, see [`ALTER SEQUENCE`](alter-sequence-syntax-support.md).  | 
|  `DROP`  |  `SEQUENCE`  |  For information about the supported syntax of the `DROP SEQUENCE` command, see [`DROP SEQUENCE`](drop-sequence-syntax-support.md).  | 
|  `CREATE`  |  `ROLE`, `WITH`  |    | 
|  `CREATE`  |  `FUNCTION`  |  `LANGUAGE SQL`  | 
|  `CREATE`  |  `DOMAIN`  |    | 

## Data Manipulation Language (DML)
<a name="dsql-dml"></a>

Aurora DSQL supports the following PostgreSQL DML commands.


| Command | Primary clause | Supported clauses | 
| --- | --- | --- | 
|  `INSERT`  |  `INTO`  |  `VALUES`, `SELECT`  | 
|  `UPDATE`  |  `SET`  |  `WHERE (SELECT)`, `FROM`, `WITH`  | 
|  `DELETE`  |  `FROM`  |  `USING`, `WHERE`  | 
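
The supported clauses might be combined as follows; the tables and columns are hypothetical:

```
-- INSERT with VALUES and with SELECT
INSERT INTO orders VALUES (1, 101, 'new');
INSERT INTO archived_orders SELECT * FROM orders WHERE status = 'closed';

-- UPDATE with FROM
UPDATE orders o
SET status = 'priority'
FROM customers c
WHERE o.customer_id = c.id AND c.tier = 'gold';

-- DELETE with USING and WHERE
DELETE FROM orders o
USING customers c
WHERE o.customer_id = c.id AND c.inactive;
```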

## Data Control Language (DCL)
<a name="dsql-dcl"></a>

Aurora DSQL supports the following PostgreSQL DCL commands.


| Command | Supported clauses | 
| --- | --- | 
|  `GRANT`  |  `ON`, `TO`  | 
|  `REVOKE`  |  `ON`, `FROM`, `CASCADE`, `RESTRICT`  | 
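
For example, assuming a role named `app_role` exists:

```
GRANT SELECT, INSERT ON orders TO app_role;
REVOKE INSERT ON orders FROM app_role;
```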

## Transaction Control Language (TCL)
<a name="dsql-tcl"></a>

Aurora DSQL supports the following PostgreSQL TCL commands.


| Command | Supported clauses | Alias | 
| --- | --- | --- | 
|  `COMMIT`  |  [`WORK` \| `TRANSACTION`] [`AND NO CHAIN`]  |  `END`  | 
|  `BEGIN`  |  [`WORK` \| `TRANSACTION`] [`ISOLATION LEVEL REPEATABLE READ`] [`READ WRITE` \| `READ ONLY`]  |    | 
|  `START TRANSACTION`  |  [`ISOLATION LEVEL REPEATABLE READ`] [`READ WRITE` \| `READ ONLY`]  |    | 
|  `ROLLBACK`  |  [`WORK` \| `TRANSACTION`] [`AND NO CHAIN`]  |  `ABORT`  | 
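
A read-only transaction at the supported isolation level might look like this (the table is hypothetical):

```
START TRANSACTION ISOLATION LEVEL REPEATABLE READ READ ONLY;
SELECT COUNT(*) FROM orders;
COMMIT;
```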

## Utility commands
<a name="dsql-utility"></a>

Aurora DSQL supports the following PostgreSQL utility commands:
+ `EXPLAIN`
+ `ANALYZE` (relation name only)
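
For example, with a hypothetical `orders` table:

```
EXPLAIN SELECT * FROM orders WHERE order_id = 42;
ANALYZE orders;
```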

# Supported subsets of SQL commands in Aurora DSQL
<a name="working-with-postgresql-compatibility-supported-sql-subsets"></a>

This section provides detailed information about supported SQL commands, focusing on commands with extensive parameter sets and subcommands. For example, `CREATE TABLE` in PostgreSQL offers many clauses and parameters, and Aurora DSQL supports a subset of them. The following topics describe the supported subsets of common SQL commands using the familiar PostgreSQL syntax elements.

**Topics**
+ [`CREATE TABLE`](create-table-syntax-support.md)
+ [`ALTER TABLE`](alter-table-syntax-support.md)
+ [`CREATE SEQUENCE`](create-sequence-syntax-support.md)
+ [`ALTER SEQUENCE`](alter-sequence-syntax-support.md)
+ [`DROP SEQUENCE`](drop-sequence-syntax-support.md)
+ [`CREATE VIEW`](create-view.md)
+ [`ALTER VIEW`](alter-view-syntax-support.md)
+ [`DROP VIEW`](drop-view-overview.md)

# `CREATE TABLE`
<a name="create-table-syntax-support"></a>

`CREATE TABLE` defines a new table.

```
CREATE TABLE [ IF NOT EXISTS ] table_name ( [
  { column_name data_type [ column_constraint [ ... ] ]
    | table_constraint
    | LIKE source_table [ like_option ... ] }
    [, ... ]
] )

where column_constraint is:

[ CONSTRAINT constraint_name ]
{ NOT NULL |
  NULL |
  CHECK ( expression ) |
  DEFAULT default_expr |
  GENERATED ALWAYS AS ( generation_expr ) STORED |
  GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY ( sequence_options ) |
  UNIQUE [ NULLS [ NOT ] DISTINCT ] index_parameters |
  PRIMARY KEY index_parameters }

and table_constraint is:

[ CONSTRAINT constraint_name ]
{ CHECK ( expression ) |
  UNIQUE [ NULLS [ NOT ] DISTINCT ] ( column_name [, ... ] ) index_parameters |
  PRIMARY KEY ( column_name [, ... ] ) index_parameters }

and like_option is:

{ INCLUDING | EXCLUDING } { COMMENTS | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | ALL }

index_parameters in UNIQUE and PRIMARY KEY constraints are:
[ INCLUDE ( column_name [, ... ] ) ]
```

## Identity columns
<a name="create-table-identity-columns"></a>

**Note**  
When using identity columns, the cache value should be carefully considered. For more information, see the Important callout on the [`CREATE SEQUENCE`](create-sequence-syntax-support.md) page.  
For guidance on how best to use identity columns based on workload patterns, see [Working with sequences and identity columns](sequences-identity-columns-working-with.md).

The `GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY ( sequence_options )` clause creates the column as an *identity column*. An implicit sequence is attached to the column, and newly inserted rows automatically receive values from that sequence. Such a column is implicitly `NOT NULL`.

The clauses `ALWAYS` and `BY DEFAULT` determine how explicitly user-specified values are handled in `INSERT` and `UPDATE` commands.

In an `INSERT` command, if `ALWAYS` is selected, a user-specified value is only accepted if the `INSERT` statement specifies `OVERRIDING SYSTEM VALUE`. If `BY DEFAULT` is selected, then the user-specified value takes precedence.

In an `UPDATE` command, if `ALWAYS` is selected, any update of the column to any value other than `DEFAULT` will be rejected. If `BY DEFAULT` is selected, the column can be updated normally. (There is no `OVERRIDING` clause for the `UPDATE` command.)
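
The difference between the two modes can be sketched as follows; the table is hypothetical, and an explicit `CACHE` is included because Aurora DSQL requires sequences to declare a cache value:

```
CREATE TABLE items (
    item_id bigint GENERATED ALWAYS AS IDENTITY (CACHE 65536) PRIMARY KEY,
    name text
);

-- Accepted: the identity value is generated from the implicit sequence
INSERT INTO items (name) VALUES ('widget');

-- Rejected: GENERATED ALWAYS refuses a user-specified value ...
INSERT INTO items (item_id, name) VALUES (100, 'gadget');

-- ... unless OVERRIDING SYSTEM VALUE is specified
INSERT INTO items (item_id, name) OVERRIDING SYSTEM VALUE VALUES (100, 'gadget');
```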

The *sequence_options* clause can be used to override the parameters of the sequence. The available options include those shown for [`CREATE SEQUENCE`](create-sequence-syntax-support.md), plus `SEQUENCE NAME name`. Without `SEQUENCE NAME`, the system chooses an unused name for the sequence.

# `ALTER TABLE`
<a name="alter-table-syntax-support"></a>

`ALTER TABLE` changes the definition of a table.

```
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
    action [, ... ]
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
    RENAME [ COLUMN ] column_name TO new_column_name
ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
    RENAME CONSTRAINT constraint_name TO new_constraint_name
ALTER TABLE [ IF EXISTS ] name
    RENAME TO new_name
ALTER TABLE [ IF EXISTS ] name
    SET SCHEMA new_schema

where action is one of:

    ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type
    ADD table_constraint_using_index
    ALTER [ COLUMN ] column_name { SET GENERATED { ALWAYS | BY DEFAULT } | SET sequence_option | RESTART [ [ WITH ] restart ] } [...]
    ALTER [ COLUMN ] column_name DROP IDENTITY [ IF EXISTS ]
    OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }

and table_constraint_using_index is:

    [ CONSTRAINT constraint_name ]
    UNIQUE USING INDEX index_name
```

## Identity column actions
<a name="alter-table-identity-columns"></a>

**`SET GENERATED { ALWAYS | BY DEFAULT }` / `SET sequence_option` / `RESTART`**  
These forms change whether a column is an identity column or change the generation attribute of an existing identity column. See [`CREATE TABLE`](create-table-syntax-support.md) for details. Like `SET DEFAULT`, these forms only affect the behavior of subsequent `INSERT` and `UPDATE` commands; they do not cause rows already in the table to change.  
The *sequence_option* is an option supported by [`ALTER SEQUENCE`](alter-sequence-syntax-support.md), such as `INCREMENT BY`. These forms alter the sequence that underlies an existing identity column.

**`DROP IDENTITY [ IF EXISTS ]`**  
This form removes the identity property from a column. If `DROP IDENTITY IF EXISTS` is specified and the column is not an identity column, no error is thrown. In this case a notice is issued instead.

## Add constraint actions
<a name="alter-table-add-constraint"></a>

**`ADD table_constraint_using_index`**  
This form adds a new `UNIQUE` constraint to a table based on an existing unique index. All the columns of the index will be included in the constraint.  
The index must be in a `VALID` state; adding a unique constraint using an index while the index is currently building is not supported.  
If a constraint name is provided then the index will be renamed to match the constraint name. Otherwise the constraint will be named the same as the index.  
After this command is executed, the index is "owned" by the constraint, in the same way as if the index had been built by a regular `CREATE UNIQUE INDEX ASYNC` command. In particular, dropping the constraint will make the index disappear too.
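
Putting these rules together, attaching a constraint to an existing index might look like the following sketch (the table, column, and index names are hypothetical); the `ALTER TABLE` can succeed only after the asynchronously built index reaches the `VALID` state:

```
-- Build the unique index first
CREATE UNIQUE INDEX ASYNC orders_email_idx ON orders (email);

-- After the index is VALID, attach it to a named constraint;
-- the index is renamed to match the constraint name
ALTER TABLE orders
    ADD CONSTRAINT orders_email_unique UNIQUE USING INDEX orders_email_idx;
```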

# `CREATE SEQUENCE`
<a name="create-sequence-syntax-support"></a>

`CREATE SEQUENCE` — define a new sequence generator.

**Important**  
In PostgreSQL, specifying `CACHE` is optional and defaults to 1. In a distributed system such as Amazon Aurora DSQL, sequence operations involve coordination, and a cache size of 1 can increase coordination overhead under high concurrency. Larger cache values allow sequence numbers to be served from locally preallocated ranges, improving throughput, but unused reserved values can be lost, making gaps and ordering effects more visible. Because applications differ in their sensitivity to allocation ordering versus throughput, Amazon Aurora DSQL requires `CACHE` to be specified explicitly and currently supports `CACHE = 1` or `CACHE >= 65536`. This provides a clear distinction between allocation behavior that is closer to strictly sequential generation and allocation optimized for highly concurrent workloads.  
When `CACHE >= 65536`, sequence values remain guaranteed to be unique but might not be generated in strict increasing order across sessions, and gaps can occur, particularly when cached values are not fully consumed. These characteristics are consistent with PostgreSQL semantics for cached sequences under concurrent use, where both systems guarantee distinct values but do not guarantee strictly sequential ordering across sessions.  
Within a single client session, sequence values may not always appear strictly increasing, particularly outside explicit transactions. This behavior is similar to PostgreSQL deployments that use connection pooling. Allocation behavior closer to a single-session PostgreSQL environment can be achieved by using `CACHE = 1` or by obtaining sequence values within explicit transactions.  
With `CACHE = 1`, sequence allocation follows PostgreSQL's non-cached sequence behavior.  
For guidance on how best to use sequences based on workload patterns, see [Working with sequences and identity columns](sequences-identity-columns-working-with.md).

## Supported syntax
<a name="create-sequence-supported-syntax"></a>

```
CREATE SEQUENCE [ IF NOT EXISTS ] name CACHE cache
    [ AS data_type ]
    [ INCREMENT [ BY ] increment ]
    [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
    [ [ NO ] CYCLE ]
    [ START [ WITH ] start ]
    [ OWNED BY { table_name.column_name | NONE } ]

where data_type is BIGINT
      and cache = 1 or cache >= 65536
```

## Description
<a name="create-sequence-description"></a>

`CREATE SEQUENCE` creates a new sequence number generator. This involves creating and initializing a new special single-row table with the name *name*. The generator will be owned by the user issuing the command.

If a schema name is given then the sequence is created in the specified schema. Otherwise it is created in the current schema. The sequence name must be distinct from the name of any other relation (table, sequence, index, view, materialized view, or foreign table) in the same schema.

After a sequence is created, you use the functions `nextval`, `currval`, and `setval` to operate on the sequence. These functions are documented in [Sequence manipulation functions](sequence-functions-syntax-support.md).

Although you cannot update a sequence directly, you can use a query like:

```
SELECT * FROM name;
```

to examine some of the parameters and current state of a sequence. In particular, the `last_value` field of the sequence shows the last value allocated by any session. (Of course, this value might be obsolete by the time it is printed, if other sessions are actively doing `nextval` calls.) Other parameters such as *increment* and *maxvalue* can be observed in the `pg_sequences` view.

## Parameters
<a name="create-sequence-parameters"></a>

**`IF NOT EXISTS`**  
Do not throw an error if a relation with the same name already exists. A notice is issued in this case. Note that there is no guarantee that the existing relation is anything like the sequence that would have been created — it might not even be a sequence.

***name***  
The name (optionally schema-qualified) of the sequence to be created.

***data_type***  
The optional clause `AS data_type` specifies the data type of the sequence. The only valid type is `bigint`, which is also the default. The data type determines the default minimum and maximum values of the sequence.

***increment***  
The optional clause `INCREMENT BY increment` specifies which value is added to the current sequence value to create a new value. A positive value will make an ascending sequence, a negative one a descending sequence. The default value is 1.

***minvalue* / `NO MINVALUE`**  
The optional clause `MINVALUE minvalue` determines the minimum value a sequence can generate. If this clause is not supplied or `NO MINVALUE` is specified, then defaults will be used. The default for an ascending sequence is 1. The default for a descending sequence is the minimum value of the data type.

***maxvalue* / `NO MAXVALUE`**  
The optional clause `MAXVALUE maxvalue` determines the maximum value for the sequence. If this clause is not supplied or `NO MAXVALUE` is specified, then default values will be used. The default for an ascending sequence is the maximum value of the data type. The default for a descending sequence is -1.

**`CYCLE` / `NO CYCLE`**  
The `CYCLE` option allows the sequence to wrap around when the *maxvalue* or *minvalue* has been reached by an ascending or descending sequence respectively. If the limit is reached, the next number generated will be the *minvalue* or *maxvalue*, respectively.  
If `NO CYCLE` is specified, any calls to `nextval` after the sequence has reached its maximum value will return an error. If neither `CYCLE` nor `NO CYCLE` is specified, `NO CYCLE` is the default.

***start***  
The optional clause `START WITH start` allows the sequence to begin anywhere. The default starting value is *minvalue* for ascending sequences and *maxvalue* for descending ones.

***cache***  
The clause `CACHE cache` specifies how many sequence numbers are to be preallocated and stored in memory for faster access. The acceptable values for `CACHE` in Aurora DSQL are 1 or any number >= 65536. The minimum value is 1 (only one value can be generated at a time, meaning no cache).

**`OWNED BY table_name.column_name` / `OWNED BY NONE`**  
The `OWNED BY` option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well. The specified table must have the same owner and be in the same schema as the sequence. `OWNED BY NONE`, the default, specifies that there is no such association.

## Notes
<a name="create-sequence-notes"></a>

Use [`DROP SEQUENCE`](drop-sequence-syntax-support.md) to remove a sequence.

Sequences are based on `bigint` arithmetic, so the range cannot exceed the range of an eight-byte integer (-9223372036854775808 to 9223372036854775807).

Because `nextval` and `setval` calls are never rolled back, sequence objects cannot be used if "gapless" assignment of sequence numbers is needed.

Each session will allocate and cache successive sequence values during one access to the sequence object and increase the sequence object's `last_value` accordingly. Then, the next *cache*-1 uses of `nextval` within that session simply return the preallocated values without touching the sequence object. So, any numbers allocated but not used within a session will be lost when that session ends, resulting in "holes" in the sequence.

Furthermore, although multiple sessions are guaranteed to allocate distinct sequence values, the values might be generated out of sequence when all the sessions are considered. For example, with a *cache* setting of 10, session A might reserve values 1..10 and return `nextval`=1, then session B might reserve values 11..20 and return `nextval`=11 before session A has generated `nextval`=2. Thus, with a *cache* setting of one it is safe to assume that `nextval` values are generated sequentially; with a *cache* setting greater than one you should only assume that the `nextval` values are all distinct, not that they are generated purely sequentially. Also, `last_value` will reflect the latest value reserved by any session, whether or not it has yet been returned by `nextval`.

Another consideration is that a `setval` executed on such a sequence will not be noticed by other sessions until they have used up any preallocated values they have cached.

## Examples
<a name="create-sequence-examples"></a>

Create an ascending sequence called `serial`, starting at 101:

```
CREATE SEQUENCE serial CACHE 65536 START 101;
```

Select the next number from this sequence:

```
SELECT nextval('serial');

 nextval
---------
     101
```

Select the next number from this sequence:

```
SELECT nextval('serial');

 nextval
---------
     102
```

Use this sequence in an `INSERT` command:

```
INSERT INTO distributors VALUES (nextval('serial'), 'nothing');
```

Reset the sequence to a specific value using `setval`:

```
SELECT setval('serial', 200);
SELECT nextval('serial');

 nextval
---------
     201
```

## Compatibility
<a name="create-sequence-compatibility"></a>

`CREATE SEQUENCE` conforms to the SQL standard, with the following exceptions:
+ Obtaining the next value is done using the `nextval()` function instead of the standard's `NEXT VALUE FOR` expression.
+ The `OWNED BY` clause is a PostgreSQL extension.

# `ALTER SEQUENCE`
<a name="alter-sequence-syntax-support"></a>

`ALTER SEQUENCE` — change the definition of a sequence generator.

**Important**  
When using sequences, the cache value should be carefully considered. For more information, see the Important callout on the [`CREATE SEQUENCE`](create-sequence-syntax-support.md) page.  
For guidance on how best to use sequences based on workload patterns, see [Working with sequences and identity columns](sequences-identity-columns-working-with.md).

## Supported syntax
<a name="alter-sequence-supported-syntax"></a>

```
ALTER SEQUENCE [ IF EXISTS ] name
    [ INCREMENT [ BY ] increment ]
    [ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
    [ [ NO ] CYCLE ]
    [ START [ WITH ] start ]
    [ RESTART [ [ WITH ] restart ] ]
    [ CACHE cache ]
    [ OWNED BY { table_name.column_name | NONE } ]
ALTER SEQUENCE [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER SEQUENCE [ IF EXISTS ] name RENAME TO new_name
ALTER SEQUENCE [ IF EXISTS ] name SET SCHEMA new_schema

where cache is 1 or cache >= 65536
```

## Description
<a name="alter-sequence-description"></a>

`ALTER SEQUENCE` changes the parameters of an existing sequence generator. Any parameters not specifically set in the `ALTER SEQUENCE` command retain their prior settings.

You must own the sequence to use `ALTER SEQUENCE`. To change a sequence's schema, you must also have `CREATE` privilege on the new schema. To alter the owner, you must be able to `SET ROLE` to the new owning role, and that role must have `CREATE` privilege on the sequence's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the sequence. However, a superuser can alter ownership of any sequence anyway.)

## Parameters
<a name="alter-sequence-parameters"></a>

***name***  
The name (optionally schema-qualified) of a sequence to be altered.

**`IF EXISTS`**  
Do not throw an error if the sequence does not exist. A notice is issued in this case.

***increment***  
The clause `INCREMENT BY increment` is optional. A positive value will make an ascending sequence, a negative one a descending sequence. If unspecified, the old increment value will be maintained.

***minvalue* / `NO MINVALUE`**  
The optional clause `MINVALUE minvalue` determines the minimum value a sequence can generate. If `NO MINVALUE` is specified, the defaults of 1 and the minimum value of the data type for ascending and descending sequences, respectively, will be used. If neither option is specified, the current minimum value will be maintained.

***maxvalue* / `NO MAXVALUE`**  
The optional clause `MAXVALUE maxvalue` determines the maximum value for the sequence. If `NO MAXVALUE` is specified, the defaults of the maximum value of the data type and -1 for ascending and descending sequences, respectively, will be used. If neither option is specified, the current maximum value will be maintained.

**`CYCLE`**  
The optional `CYCLE` key word can be used to enable the sequence to wrap around when the *maxvalue* or *minvalue* has been reached by an ascending or descending sequence respectively. If the limit is reached, the next number generated will be the *minvalue* or *maxvalue*, respectively.

**`NO CYCLE`**  
If the optional `NO CYCLE` key word is specified, any calls to `nextval` after the sequence has reached its maximum value will return an error. If neither `CYCLE` nor `NO CYCLE` is specified, the old cycle behavior will be maintained.

***start***  
The optional clause `START WITH start` changes the recorded start value of the sequence. This has no effect on the current sequence value; it simply sets the value that future `ALTER SEQUENCE RESTART` commands will use.

***restart***  
The optional clause `RESTART [ WITH restart ]` changes the current value of the sequence. This is similar to calling the `setval` function with `is_called` = `false`: the specified value will be returned by the next call of `nextval`. Writing `RESTART` with no *restart* value is equivalent to supplying the start value that was recorded by `CREATE SEQUENCE` or last set by `ALTER SEQUENCE START WITH`.  
In contrast to a `setval` call, a `RESTART` operation on a sequence is transactional and blocks concurrent transactions from obtaining numbers from the same sequence. If that's not the desired mode of operation, `setval` should be used.

***cache***  
The clause `CACHE cache` enables sequence numbers to be preallocated and stored in memory for faster access. The value must be either 1 or some value >= 65536. If unspecified, the old cache value will be maintained. For more information about cache behavior, see the guidance under [`CREATE SEQUENCE`](create-sequence-syntax-support.md).

**`OWNED BY table_name.column_name` / `OWNED BY NONE`**  
The `OWNED BY` option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well. If specified, this association replaces any previously specified association for the sequence. The specified table must have the same owner and be in the same schema as the sequence. Specifying `OWNED BY NONE` removes any existing association, making the sequence "free-standing".
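A brief sketch, using hypothetical names, of associating a sequence with a column and then detaching it:

```
ALTER SEQUENCE order_id_seq OWNED BY orders.id;  -- dropped automatically if orders.id is dropped
ALTER SEQUENCE order_id_seq OWNED BY NONE;       -- make the sequence free-standing again
```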

***new\_owner***  
The user name of the new owner of the sequence.

***new\_name***  
The new name for the sequence.

***new\_schema***  
The new schema for the sequence.

## Notes
<a name="alter-sequence-notes"></a>

`ALTER SEQUENCE` will not immediately affect `nextval` results in backends, other than the current one, that have preallocated (cached) sequence values. They will use up all cached values prior to noticing the changed sequence generation parameters. The current backend will be affected immediately.

`ALTER SEQUENCE` does not affect the `currval` status for the sequence.

`ALTER SEQUENCE` may cause concurrent transactions that use the same sequence to fail with optimistic concurrency control (OCC) conflicts.

For historical reasons, `ALTER TABLE` can be used with sequences too; but the only variants of `ALTER TABLE` that are allowed with sequences are equivalent to the forms shown above.

## Examples
<a name="alter-sequence-examples"></a>

Restart a sequence called `serial`, at 105:

```
ALTER SEQUENCE serial RESTART WITH 105;
```

## Compatibility
<a name="alter-sequence-compatibility"></a>

`ALTER SEQUENCE` conforms to the SQL standard, except for the `AS`, `START WITH`, `OWNED BY`, `OWNER TO`, `RENAME TO`, and `SET SCHEMA` clauses, which are PostgreSQL extensions.

# `DROP SEQUENCE`
<a name="drop-sequence-syntax-support"></a>

`DROP SEQUENCE` — remove a sequence.

## Supported syntax
<a name="drop-sequence-supported-syntax"></a>

```
DROP SEQUENCE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```

## Description
<a name="drop-sequence-description"></a>

`DROP SEQUENCE` removes sequence number generators. A sequence can only be dropped by its owner or a superuser.

## Parameters
<a name="drop-sequence-parameters"></a>

**`IF EXISTS`**  
Do not throw an error if the sequence does not exist. A notice is issued in this case.

***name***  
The name (optionally schema-qualified) of a sequence.

**`CASCADE`**  
Automatically drop objects that depend on the sequence, and in turn all objects that depend on those objects.

**`RESTRICT`**  
Refuse to drop the sequence if any objects depend on it. This is the default.

## Examples
<a name="drop-sequence-examples"></a>

To remove the sequence `seq`:

```
DROP SEQUENCE seq;
```
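To drop several sequences at once without raising an error if some don't exist (the names are hypothetical):

```
DROP SEQUENCE IF EXISTS order_id_seq, invoice_id_seq;
```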

## Compatibility
<a name="drop-sequence-compatibility"></a>

`DROP SEQUENCE` conforms to the SQL standard, except that the standard allows only one sequence to be dropped per command and does not include the `IF EXISTS` option, which is a PostgreSQL extension.

# `CREATE VIEW`
<a name="create-view"></a>

`CREATE VIEW` defines a new view. Aurora DSQL supports only permanent views; temporary views are not supported.

## Supported syntax
<a name="create-view-supported-syntax"></a>

```
CREATE [ OR REPLACE ] [ RECURSIVE ] VIEW name [ ( column_name [, ...] ) ]
    [ WITH ( view_option_name [= view_option_value] [, ... ] ) ]
    AS query
    [ WITH [ CASCADED | LOCAL ] CHECK OPTION ]
```

## Description
<a name="create-view-description"></a>

`CREATE VIEW` defines a view of a query. The view is not physically materialized. Instead, the query is run every time the view is referenced in a query. 

`CREATE OR REPLACE VIEW` is similar, but if a view of the same name already exists, it is replaced. The new query must generate the same columns that were generated by the existing view query (that is, the same column names in the same order and with the same data types), but it may add additional columns to the end of the list. The calculations giving rise to the output columns may be different. 

If a schema name is given (such as `CREATE VIEW myschema.myview ...`), then the view is created in the specified schema. Otherwise, it is created in the current schema. 

The name of the view must be distinct from the name of any other relation (table, index, view) in the same schema. 

## Parameters
<a name="create-view-parameters"></a>

`CREATE VIEW` supports the following parameters.

**`RECURSIVE`**  
Creates a recursive view. The syntax: `CREATE RECURSIVE VIEW [ schema . ] view_name (column_names) AS SELECT ...;` is equivalent to `CREATE VIEW [ schema . ] view_name AS WITH RECURSIVE view_name (column_names) AS (SELECT ...) SELECT column_names FROM view_name;`.   
A view column name list must be specified for a recursive view.

**`name`**  
The name (optionally schema-qualified) of the view to be created. 

**`column_name`**  
An optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.

**`WITH ( view_option_name [= view_option_value] [, ... ] )`**  
This clause specifies optional parameters for a view; the following parameters are supported.  
+ `check_option (enum)` - This parameter may be either `local` or `cascaded`, and is equivalent to specifying `WITH [ CASCADED | LOCAL ] CHECK OPTION`.
+ `security_barrier (boolean)` - This should be used if the view is intended to provide row-level security. Aurora DSQL does not currently support row-level security, but this option will still force the view's `WHERE` conditions (and any conditions using operators which are marked as `LEAKPROOF`) to be evaluated first.
+ `security_invoker (boolean)` - This option causes the underlying base relations to be checked against the privileges of the user of the view rather than the view owner. See the notes below for full details.
All of the above options can be changed on existing views using `ALTER VIEW`.
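For example, a sketch (with hypothetical table and view names) of a view whose underlying table is checked against the querying user's privileges rather than the view owner's:

```
CREATE VIEW visible_employees WITH (security_invoker = true) AS
    SELECT id, name FROM employees;
```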

**`query`**  
A `SELECT` or `VALUES` command which will provide the columns and rows of the view.

**`WITH [ CASCADED | LOCAL ] CHECK OPTION`**  
This option controls the behavior of automatically updatable views. When this option is specified, `INSERT` and `UPDATE` commands on the view will be checked to ensure that new rows satisfy the view-defining condition (that is, the new rows are checked to ensure that they are visible through the view). If they are not, the update will be rejected. If the `CHECK OPTION` is not specified, `INSERT` and `UPDATE` commands on the view are allowed to create rows that are not visible through the view.  
`LOCAL`—New rows are only checked against the conditions defined directly in the view itself. Any conditions defined on underlying base views are not checked (unless they also specify the `CHECK OPTION`).  
`CASCADED`—New rows are checked against the conditions of the view and all underlying base views. If the `CHECK OPTION` is specified, and neither `LOCAL` nor `CASCADED` is specified, then `CASCADED` is assumed.   
The `CHECK OPTION` may not be used with `RECURSIVE` views. The `CHECK OPTION` is only supported on views that are automatically updatable.
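To illustrate, assuming a hypothetical `films` table, an `INSERT` that produces a row not visible through the view is rejected:

```
CREATE VIEW checked_comedies AS
    SELECT * FROM films WHERE kind = 'Comedy'
    WITH CHECK OPTION;

INSERT INTO checked_comedies (id, kind) VALUES (1, 'Drama');  -- rejected by the CHECK OPTION
INSERT INTO checked_comedies (id, kind) VALUES (2, 'Comedy'); -- allowed
```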

## Notes
<a name="create-view-notes"></a>

Use the `DROP VIEW` statement to drop views. 

The names and data types of the view's columns should be carefully considered. For example, `CREATE VIEW vista AS SELECT 'Hello World';` is not recommended because the column name defaults to `?column?`. Also, the column data type defaults to `text`, which might not be what you want. 

A better approach is to explicitly specify the column name and data type, such as: `CREATE VIEW vista AS SELECT text 'Hello World' AS hello;`. 

By default, access to the underlying base relations referenced in the view is determined by the permissions of the view owner. In some cases, this can be used to provide secure but restricted access to the underlying tables. However, not all views are secure against tampering.
+ If the view has the `security_invoker` property set to true, access to the underlying base relations is determined by the permissions of the user executing the query, rather than the view owner. Thus, the user of a security invoker view must have the relevant permissions on the view and its underlying base relations.
+ If any of the underlying base relations is a security invoker view, it will be treated as if it had been accessed directly from the original query. Thus, a security invoker view will always check its underlying base relations using the permissions of the current user, even if it is accessed from a view without the `security_invoker` property.
+ Functions called in the view are treated the same as if they had been called directly from the query using the view. Therefore, the user of a view must have permissions to call all functions used by the view. Functions in the view are executed with the privileges of the user executing the query or the function owner, depending on whether the functions are defined as `SECURITY INVOKER` or `SECURITY DEFINER`.
+ The user creating or replacing a view must have `USAGE` privileges on any schemas referred to in the view query, in order to look up the referenced objects in those schemas.
+ When `CREATE OR REPLACE VIEW` is used on an existing view, only the view's defining `SELECT` rule, plus any `WITH ( ... )` parameters and its `CHECK OPTION` are changed. Other view properties, including ownership, permissions, and non-SELECT rules, remain unchanged. You must own the view to replace it (this includes being a member of the owning role).

## Updatable views
<a name="create-view-updatable-view"></a>

Simple views are automatically updatable: the system will allow `INSERT`, `UPDATE`, and `DELETE` statements to be used on the view in the same way as on a regular table. A view is automatically updatable if it satisfies all of the following conditions:
+ The view must have exactly one entry in its `FROM` list, which must be a table or another updatable view.
+ The view definition must not contain `WITH`, `DISTINCT`, `GROUP BY`, `HAVING`, `LIMIT`, or `OFFSET` clauses at the top level.
+ The view definition must not contain set operations (`UNION`, `INTERSECT`, or `EXCEPT`) at the top level.
+ The view's select list must not contain any aggregates, window functions, or set-returning functions.

An automatically updatable view may contain a mix of updatable and non-updatable columns. A column is updatable if it's a simple reference to an updatable column of the underlying base relation. Otherwise, the column is read-only, and an error occurs if an `INSERT` or `UPDATE` statement attempts to assign a value to it.

A more complex view that doesn't satisfy all these conditions is read-only by default: the system doesn't allow an insert, update, or delete on the view.
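The distinction can be sketched as follows, assuming a hypothetical `films` table:

```
-- Automatically updatable: one base table, no aggregates or set operations
CREATE VIEW recent_films AS
    SELECT * FROM films WHERE release_year >= 2020;

-- Read-only: the view contains GROUP BY and an aggregate
CREATE VIEW films_per_year AS
    SELECT release_year, count(*) FROM films GROUP BY release_year;
```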

**Note**  
The user performing the insert, update, or delete on the view must have the corresponding insert, update, or delete privilege on the view. By default, the view's owner must have the relevant privileges on the underlying base relations, while the user performing the update doesn't need any permissions on the underlying base relations. However, if the view has `security_invoker` set to true, the user performing the update, rather than the view owner, must have the relevant privileges on the underlying base relations.

## Examples
<a name="create-view-examples"></a>

To create a view consisting of all comedy films:

```
CREATE VIEW comedies AS
    SELECT *
    FROM films
    WHERE kind = 'Comedy';
```

Create a view with `LOCAL CHECK OPTION`:

```
CREATE VIEW pg_comedies AS
    SELECT *
    FROM comedies
    WHERE classification = 'PG'
    WITH LOCAL CHECK OPTION;
```

Create a recursive view:

```
CREATE RECURSIVE VIEW public.nums_1_100 (n) AS
    VALUES (1)
UNION ALL
    SELECT n+1 FROM nums_1_100 WHERE n < 100;
```

## Compatibility
<a name="create-view-compatibility"></a>

`CREATE OR REPLACE VIEW` is a PostgreSQL language extension. The `WITH ( ... )` clause is an extension as well, as are security barrier views and security invoker views. Aurora DSQL supports these language extensions. 

# `ALTER VIEW`
<a name="alter-view-syntax-support"></a>

The `ALTER VIEW` statement allows changing various properties of an existing view, and Aurora DSQL supports all the PostgreSQL syntax for this command.

## Supported syntax
<a name="alter-view-supported-syntax"></a>

```
ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name SET DEFAULT expression
ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name DROP DEFAULT
ALTER VIEW [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER VIEW [ IF EXISTS ] name RENAME [ COLUMN ] column_name TO new_column_name
ALTER VIEW [ IF EXISTS ] name RENAME TO new_name
ALTER VIEW [ IF EXISTS ] name SET SCHEMA new_schema
ALTER VIEW [ IF EXISTS ] name SET ( view_option_name [= view_option_value] [, ... ] )
ALTER VIEW [ IF EXISTS ] name RESET ( view_option_name [, ... ] )
```

## Description
<a name="alter-view-description"></a>

`ALTER VIEW` changes various auxiliary properties of a view. (If you want to modify the view's defining query, use `CREATE OR REPLACE VIEW`.) You must own the view to use `ALTER VIEW`. To change a view's schema, you must also have `CREATE` privilege on the new schema. To alter the owner, you must be able to `SET ROLE` to the new owning role, and that role must have `CREATE` privilege on the view's schema.

## Parameters
<a name="alter-view-parameters"></a>

**`name`**  
The name (optionally schema-qualified) of an existing view. 

**`column_name`**  
Name of an existing column, or new name for an existing column.

**`IF EXISTS`**  
Don't throw an error if the view doesn't exist. A notice is issued in this case.

**`SET/DROP DEFAULT`**  
These forms set or remove the default value for a column. The default value for a view column is substituted into any `INSERT` or `UPDATE` command where the target is the view.

**`new_owner`**  
The user name of the new owner of the view.

**`new_name`**  
The new name for the view.

**`new_schema`**  
The new schema for the view.

**`SET ( view_option_name [= view_option_value] [, ... ] )`**  
Sets a view option. The following are supported options:  
+ `check_option (enum)` - Changes the check option of the view. The value must be `local` or `cascaded`.
+ `security_barrier (boolean)` - Changes the security-barrier property of the view.
+ `security_invoker (boolean)` - Changes the security-invoker property of the view.

**`RESET ( view_option_name [, ... ] )`**  
Resets a view option to its default value.

## Examples
<a name="alter-view-examples"></a>

Renaming the view `foo` to `bar`:

```
ALTER VIEW foo RENAME TO bar;
```

Attaching a default column value to an updatable view:

```
CREATE TABLE base_table (id int, ts timestamptz);
CREATE VIEW a_view AS SELECT * FROM base_table;
ALTER VIEW a_view ALTER COLUMN ts SET DEFAULT now();
INSERT INTO base_table(id) VALUES(1);  -- ts will receive a NULL
INSERT INTO a_view(id) VALUES(2);  -- ts will receive the current time
```
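Setting and then resetting a view option on the same view:

```
ALTER VIEW a_view SET (security_invoker = true);
ALTER VIEW a_view RESET (security_invoker);
```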

## Compatibility
<a name="alter-view-compatibility"></a>

`ALTER VIEW` is a PostgreSQL extension of the SQL standard that Aurora DSQL supports.

# `DROP VIEW`
<a name="drop-view-overview"></a>

The `DROP VIEW` statement removes an existing view. Aurora DSQL supports the full PostgreSQL syntax for this command.

## Supported syntax
<a name="drop-view-supported-syntax"></a>

```
DROP VIEW [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```

## Description
<a name="drop-view-description"></a>

`DROP VIEW` drops an existing view. To execute this command you must be the owner of the view.

## Parameters
<a name="drop-view-parameters"></a>

**`IF EXISTS`**  
Don't throw an error if the view doesn't exist. A notice is issued in this case.

**`name`**  
The name (optionally schema-qualified) of the view to remove.

**`CASCADE`**  
Automatically drop objects that depend on the view (such as other views), and in turn all objects that depend on those objects.

**`RESTRICT`**  
Refuse to drop the view if any objects depend on it. This is the default.

## Examples
<a name="drop-view-examples"></a>

```
DROP VIEW kinds;
```

## Compatibility
<a name="drop-view-compatibility"></a>

This command conforms to the SQL standard, except that the standard allows only one view to be dropped per command and does not include the `IF EXISTS` option, which is a PostgreSQL extension that Aurora DSQL supports.

# Migrating from PostgreSQL to Aurora DSQL
<a name="working-with-postgresql-compatibility-migration-guide"></a>

Aurora DSQL is designed to be [PostgreSQL compatible](working-with-postgresql-compatibility.md), supporting core relational features such as ACID transactions, secondary indexes, joins, and standard DML operations. Most existing PostgreSQL applications can migrate to Aurora DSQL with minimal changes.

This section provides practical guidance for migrating your application to Aurora DSQL, including framework compatibility, migration patterns, and architectural considerations.

## Framework and ORM compatibility
<a name="dsql-framework-compatibility"></a>

 Aurora DSQL uses the standard PostgreSQL wire protocol, ensuring compatibility with PostgreSQL drivers and frameworks. Most popular ORMs work with Aurora DSQL with minimal or no changes. See [Aurora DSQL adapters and dialects](aws-sdks.md#aurora-dsql-adapters) for reference implementations and available ORM integrations. 

## Common migration patterns
<a name="working-with-postgresql-compatibility-migration-considerations"></a>

 When migrating from PostgreSQL to Aurora DSQL, some features work differently or have alternative syntax. This section provides guidance on common migration scenarios. 

### DDL operation alternatives
<a name="dsql-ddl-alternatives"></a>

Aurora DSQL provides modern alternatives to traditional PostgreSQL DDL operations:

**Index creation**  
Use `CREATE INDEX ASYNC` instead of `CREATE INDEX` for non-blocking index creation.  
**Benefit:** Zero-downtime index creation on large tables.

**Data removal**  
Use `DELETE FROM table_name` instead of `TRUNCATE`.  
**Alternative:** For complete table recreation, use `DROP TABLE` followed by `CREATE TABLE`.

**System configuration**  
Aurora DSQL is fully managed, so configuration is handled automatically based on workload patterns. Use the AWS Management Console or API to manage cluster settings.  
**Benefit:** No need for database tuning or parameter management.

### Schema design patterns
<a name="dsql-schema-design-patterns"></a>

Adapt these common PostgreSQL patterns for Aurora DSQL compatibility:

**Referential integrity patterns**  
Aurora DSQL supports table relationships and `JOIN` operations. For referential integrity, implement validation in your application layer. This design aligns with modern distributed database patterns where application-layer validation provides more flexibility and avoids performance bottlenecks from cascading operations.  
**Pattern:** Implement referential integrity checks in your application layer using consistent naming conventions, validation logic, and transaction boundaries. Many high-scale applications prefer this approach for better control over error handling and performance.
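One way to sketch this pattern in SQL (the table names are hypothetical) is to verify the parent row inside the same transaction that inserts the child, and have the application reject the insert if the check returns no row:

```
BEGIN;
-- application verifies this returns a row before proceeding
SELECT 1 FROM orders WHERE id = 42;
INSERT INTO order_items (order_id, sku, quantity) VALUES (42, 'ABC-123', 2);
COMMIT;
```

Note that under optimistic concurrency control a concurrent delete of the parent row can still commit, so stricter guarantees require additional application-level coordination.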

**Temporary data handling**  
Use CTEs, subqueries, or regular tables with cleanup logic instead of temporary tables.  
**Alternative:** Create tables with session-specific names and clean them up in your application.
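For example, a CTE standing in for what might otherwise be a temporary table (the schema is hypothetical):

```
WITH recent_orders AS (
    SELECT customer_id, total
    FROM orders
    WHERE order_date >= CURRENT_DATE - 7
)
SELECT customer_id, sum(total)
FROM recent_orders
GROUP BY customer_id;
```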

## Understanding architectural differences
<a name="working-with-postgresql-compatibility-architectural-differences"></a>

Aurora DSQL's distributed, serverless architecture intentionally differs from traditional PostgreSQL in several areas. These differences enable Aurora DSQL's key benefits of simplicity and scale.

### Simplified database model
<a name="dsql-simplified-database-model"></a>

**Single database per cluster**  
Aurora DSQL provides one built-in database named `postgres` per cluster.  
**Migration tip:** If your application uses multiple databases, create separate Aurora DSQL clusters for logical separation, or use schemas within a single cluster.

**No temporary tables**  
For temporary data handling, use common table expressions (CTEs) and subqueries, which provide flexible alternatives for complex queries.  
**Alternative:** Use CTEs with `WITH` clauses for temporary result sets, or regular tables with unique naming for session-specific data.

**Automatic storage management**  
Aurora DSQL eliminates tablespaces and manual storage management. Storage automatically scales and optimizes based on your data patterns.  
**Benefit:** No need to monitor disk space, plan storage allocation, or manage tablespace configurations.

### Modern application patterns
<a name="dsql-modern-application-patterns"></a>

Aurora DSQL encourages modern application development patterns that improve maintainability and performance:

**Application-level logic instead of database triggers**  
For trigger-like functionality, implement event-driven logic in your application layer.  
**Migration strategy:** Move trigger logic to application code, use event-driven architectures with AWS services like EventBridge, or implement audit trails using application logging.

**SQL functions for data processing**  
Aurora DSQL supports SQL-based functions but not procedural languages like PL/pgSQL.  
**Alternative:** Use SQL functions for data transformations, or move complex logic to your application layer or AWS Lambda functions.

**Optimistic concurrency control instead of pessimistic locking**  
Aurora DSQL uses optimistic concurrency control (OCC), a lock-free approach that differs from traditional database locking mechanisms. Instead of acquiring locks that block other transactions, Aurora DSQL allows transactions to proceed without blocking and detects conflicts at commit time. This eliminates deadlocks and prevents slow transactions from blocking other operations.  
**Key difference:** When conflicts occur, Aurora DSQL returns a serialization error rather than making transactions wait for locks. This requires applications to implement retry logic, similar to handling lock timeouts in traditional databases, but conflicts are resolved immediately rather than causing blocking waits.  
**Design pattern:** Implement idempotent transaction logic with retry mechanisms. Design schemas to minimize contention by using random primary keys and spreading updates across your key range. For details, see [Concurrency control in Aurora DSQL](working-with-concurrency-control.md).

**Relationships and referential integrity**  
Aurora DSQL supports modeling relationships between tables and querying them with `JOIN` operations, but it doesn't enforce foreign key constraints. For referential integrity, implement validation in your application layer. While enforced referential integrity can be valuable, cascading operations (like cascading deletes) can create unexpected performance issues. For example, deleting an order with 1,000 line items becomes a 1,001-row transaction. Many customers avoid foreign key constraints for this reason.  
**Design pattern:** Implement referential integrity checks in your application layer, use eventual consistency patterns, or leverage AWS services for data validation.

### Operational simplifications
<a name="dsql-operational-simplifications"></a>

Aurora DSQL eliminates many traditional database maintenance tasks, reducing operational overhead:

**No manual maintenance required**  
Aurora DSQL automatically manages storage optimization, statistics collection, and performance tuning. Traditional maintenance commands like `VACUUM` are handled by the system.  
**Benefit:** Eliminates the need for database maintenance windows, vacuum scheduling, and system parameter tuning.

**Automatic partitioning and scaling**  
Aurora DSQL automatically partitions and distributes your data based on access patterns. Use UUIDs or application-generated IDs for optimal distribution.  
**Migration tip:** Remove manual partitioning logic and let Aurora DSQL handle data distribution. Use UUIDs or application-generated IDs for optimal distribution. If your application requires sequential identifiers, see [Sequences and identity columns](sequences-identity-columns.md).
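A sketch of a table keyed for even distribution, assuming the `gen_random_uuid()` function is available in your cluster:

```
CREATE TABLE events (
    id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    payload text,
    created_at timestamptz DEFAULT now()
);
```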

# Agentic migration with AI tools
<a name="dsql-agentic-migration"></a>

AI coding agents can accelerate your migration to Aurora DSQL by analyzing schemas, transforming code, and executing DDL migrations with built-in safety checks.

## Using Kiro for migration
<a name="dsql-kiro-migration"></a>

Coding agents such as [Kiro](https://kiro.dev/) can help you analyze and migrate your PostgreSQL code to Aurora DSQL:
+ **Schema analysis:** Upload your existing schema files and ask Kiro to identify potential compatibility issues and suggest alternatives
+ **Code transformation:** Provide your application code and ask Kiro to help refactor trigger logic, replace sequences with UUIDs, or modify transaction patterns
+ **Migration planning:** Ask Kiro to create a step-by-step migration plan based on your specific application architecture
+ **DDL migrations:** Execute schema modifications using the table recreation pattern with built-in safety checks and user verification

**Example prompts:**

```
"Analyze this PostgreSQL schema for DSQL compatibility and suggest alternatives for any unsupported features"

"Help me refactor this trigger function into application-level logic for DSQL migration"

"Create a migration checklist for moving my Django application from PostgreSQL to DSQL"

"Drop the legacy_status column from the orders table"

"Change the price column from VARCHAR to DECIMAL in the products table"
```

## DDL migration with table recreation
<a name="dsql-ddl-migration-pattern"></a>

When using AI agents with the Aurora DSQL MCP server, certain `ALTER TABLE` operations use a *table recreation pattern* that safely migrates your data. The agent handles the complexity while keeping you informed at each step.

The following operations use the table recreation pattern:


| Operation | Approach | 
| --- | --- | 
| DROP COLUMN | Exclude column from new table | 
| ALTER COLUMN TYPE | Cast data type during migration | 
| ALTER COLUMN SET/DROP NOT NULL | Change constraint in new table definition | 
| ALTER COLUMN SET/DROP DEFAULT | Define default in new table definition | 
| ADD/DROP CONSTRAINT | Include or remove constraint in new table | 
| MODIFY PRIMARY KEY | Define new PK with uniqueness validation | 
| Split/Merge columns | Use SPLIT\_PART, SUBSTRING, or CONCAT | 

The following ALTER TABLE operations are supported directly without table recreation:
+ `ALTER TABLE ... RENAME COLUMN` – Rename a column
+ `ALTER TABLE ... RENAME TO` – Rename a table
+ `ALTER TABLE ... ADD COLUMN` – Add a new column

**Safety features:** When executing DDL migrations, AI agents present the migration plan, verify data compatibility, confirm row counts, and request explicit approval before any destructive operations like DROP TABLE.

**Batched migrations:** For tables exceeding 3,000 rows, the agent automatically batches the migration in increments of 500-1,000 rows to stay within transaction limits.

## Aurora DSQL MCP server
<a name="dsql-mcp-tools"></a>

The Aurora DSQL Model Context Protocol (MCP) server allows AI assistants to connect directly to your Aurora DSQL cluster and search Aurora DSQL documentation. This enables the AI to:
+ Analyze your existing schema and suggest migration changes
+ Execute DDL migrations with the table recreation pattern
+ Test queries and verify compatibility during migration
+ Provide accurate, up-to-date guidance based on the latest Aurora DSQL documentation

To use the Aurora DSQL MCP server with AI assistants, see the setup instructions for the [Aurora DSQL MCP server](SECTION_aurora-dsql-mcp-server.md). 

## Aurora DSQL considerations for PostgreSQL compatibility
<a name="working-with-postgresql-compatibility-unsupported-limitations"></a>

Aurora DSQL has feature support differences from self-managed PostgreSQL that enable its distributed architecture, serverless operation, and automatic scaling. Most applications work within these differences without modification.

For general considerations, see [Considerations for working with Amazon Aurora DSQL](considerations.md). For quotas and limits, see [Cluster quotas and database limits in Amazon Aurora DSQL](CHAP_quotas.md).
+ Aurora DSQL uses a single built-in database named `postgres` per cluster. For logical separation, create separate Aurora DSQL clusters or use schemas within a single cluster.
+ The `postgres` database uses UTF-8 character encoding, which provides broad international character support.
+ The database uses the `C` collation only.
+ Aurora DSQL uses `UTC` as the system timezone. PostgreSQL stores all timezone-aware dates and times internally in UTC. You can set the `TimeZone` configuration parameter to control how values are displayed to the client; it also serves as the default timezone the server uses to convert client input to UTC.
+ The transaction isolation level is fixed at PostgreSQL `Repeatable Read`.
+ Transactions have the following constraints:
  + DDL and DML operations require separate transactions
  + A transaction can include only 1 DDL statement
  + A transaction can modify up to 3,000 rows, regardless of the number of secondary indexes
  + The 3,000-row limit applies to all DML statements (`INSERT`, `UPDATE`, `DELETE`)
+ Database connections time out after 1 hour.
+ Aurora DSQL manages permissions through schema-level grants. Admin users create schemas using `CREATE SCHEMA` and grant access using `GRANT USAGE ON SCHEMA`. Admin users manage objects in the public schema, while non-admin users create objects in user-created schemas for clear ownership boundaries. For more information, see [Authorizing database roles to use SQL in your database](using-database-and-iam-roles.md#using-database-and-iam-roles-custom-database-roles-sql).

## Need help with migration?
<a name="dsql-migration-feedback-link"></a>

If you encounter features that are critical for your migration but not currently supported in Aurora DSQL, see [Providing feedback on Amazon Aurora DSQL](providing-feedback.md) for information on how to share feedback with AWS.