

# Migrating T-SQL features

This topic provides conceptual content comparing various features and functionalities between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. You can gain valuable insights into the differences and similarities between these two database systems, which is crucial for planning and executing a successful migration. The content covers a wide range of topics, including data types, cursors, stored procedures, error handling, full-text search, and more. By understanding these concepts, database administrators and developers can anticipate challenges, identify potential workarounds, and make informed decisions when transitioning their databases and applications from SQL Server to Aurora PostgreSQL. This knowledge enables smoother migrations and helps maintain data integrity and functionality in the new PostgreSQL environment.

**Topics**
+ [Service Broker functionality for T-SQL](chap-sql-server-aurora-pg.tsql.servicebroker.md)
+ [SQL Server cast and convert for T-SQL](chap-sql-server-aurora-pg.tsql.castconvert.md)
+ [Common Language Runtime for T-SQL](chap-sql-server-aurora-pg.tsql.clr.md)
+ [Collations for T-SQL](chap-sql-server-aurora-pg.tsql.collations.md)
+ [Cursors for T-SQL](chap-sql-server-aurora-pg.tsql.cursors.md)
+ [Date and time functions for T-SQL](chap-sql-server-aurora-pg.tsql.datetime.md)
+ [String functions for T-SQL](chap-sql-server-aurora-pg.tsql.stringfunctions.md)
+ [Databases and schemas for T-SQL](chap-sql-server-aurora-pg.tsql.schemas.md)
+ [Dynamic SQL for T-SQL](chap-sql-server-aurora-pg.tsql.dynamicsql.md)
+ [Transactions for T-SQL](chap-sql-server-aurora-pg.tsql.transactions.md)
+ [Synonyms for T-SQL](chap-sql-server-aurora-pg.tsql.synonyms.md)
+ [Delete and update from for T-SQL](chap-sql-server-aurora-pg.tsql.delete.md)
+ [Stored procedures for T-SQL](chap-sql-server-aurora-pg.tsql.storedprocedures.md)
+ [Error handling for T-SQL](chap-sql-server-aurora-pg.tsql.errorhandling.md)
+ [Flow control for T-SQL](chap-sql-server-aurora-pg.tsql.flowcontrol.md)
+ [Full-text search for T-SQL](chap-sql-server-aurora-pg.tsql.fulltextsearch.md)
+ [SQL Server graph features for T-SQL](chap-sql-server-aurora-pg.tsql.graph.md)
+ [JSON and XML for T-SQL](chap-sql-server-aurora-pg.tsql.json.md)
+ [Merge for T-SQL](chap-sql-server-aurora-pg.tsql.merge.md)
+ [Pivot and unpivot for T-SQL](chap-sql-server-aurora-pg.tsql.pivot.md)
+ [Triggers for T-SQL](chap-sql-server-aurora-pg.tsql.triggers.md)
+ [Top fetch for T-SQL](chap-sql-server-aurora-pg.tsql.topfetch.md)
+ [User-defined functions for T-SQL](chap-sql-server-aurora-pg.tsql.udf.md)
+ [User-defined types for T-SQL](chap-sql-server-aurora-pg.tsql.udt.md)
+ [Identity and sequences for T-SQL](chap-sql-server-aurora-pg.tsql.sequences.md)

# Service Broker functionality for T-SQL


This topic provides reference information about migrating from Microsoft SQL Server 2019’s Service Broker functionality to Amazon Aurora PostgreSQL. You can understand the challenges and alternatives available when moving from SQL Server’s native messaging and queuing capabilities to Aurora PostgreSQL, which doesn’t offer a direct equivalent. The topic explores how you can achieve similar functionality using a combination of AWS services, including DB Links, AWS Lambda, and Amazon SQS.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[No compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-0.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |   [Service Broker](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.servicebroker)   |  Use AWS Lambda for similar functionality.  | 

## SQL Server Usage


SQL Server Service Broker provides native support for messaging and queuing applications. Developers use Service Broker to create complex applications that use the database engine components to communicate between several SQL Server databases. Developers can use Service Broker to build distributed and more reliable applications with less effort.

Benefits of using messaging queues:
+ Decouple dependencies between applications by communicating through messages.
+ Scale out your architecture by moving queues or message processors to separate servers as needed.
+ Maintain individual parts with a minimal impact to the end users.
+ Control when the messages are processed, for example, off-peak hours.
+ Process queued messages on multiple servers, processes, or threads.

The following sections describe the Service Broker commands.

### CREATE MESSAGE TYPE


The following example creates a message with name and structure.

```
CREATE MESSAGE TYPE message_type_name
  [ AUTHORIZATION owner_name ]
  [ VALIDATION = { NONE
    | EMPTY
    | WELL_FORMED_XML
    | VALID_XML WITH SCHEMA COLLECTION schema_collection_name
  } ]
[ ; ]
```

For more information, see [CREATE MESSAGE TYPE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-message-type-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

### CREATE QUEUE


The following example creates a queue to store messages.

```
CREATE QUEUE <object>
  [ WITH
    [ STATUS = { ON | OFF } [ , ] ]
    [ RETENTION = { ON | OFF } [ , ] ]
    [ ACTIVATION (
      [ STATUS = { ON | OFF } , ]
        PROCEDURE_NAME = <procedure> ,
        MAX_QUEUE_READERS = max_readers ,
        EXECUTE AS { SELF | 'user_name' | OWNER }
        ) [ , ] ]
    [ POISON_MESSAGE_HANDLING (
      [ STATUS = { ON | OFF } ] ) ]
    ]
      [ ON { filegroup | [ DEFAULT ] } ]
[ ; ]

<object> ::=
{
  [ database_name. [ schema_name ] . | schema_name. ]
    queue_name
}

<procedure> ::=
{
  [ database_name. [ schema_name ] . | schema_name. ]
    stored_procedure_name
}
```

For more information, see [CREATE QUEUE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-queue-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

### CREATE CONTRACT


The following example specifies the role and what type of messages a service can handle.

```
CREATE CONTRACT contract_name
  [ AUTHORIZATION owner_name ]
    ( { { message_type_name | [ DEFAULT ] }
      SENT BY { INITIATOR | TARGET | ANY }
    } [ ,...n] )
[ ; ]
```

For more information, see [CREATE CONTRACT (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-contract-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

### CREATE SERVICE


The following example creates a named Service Broker for a specified task or set of tasks.

```
CREATE SERVICE service_name
  [ AUTHORIZATION owner_name ]
  ON QUEUE [ schema_name. ]queue_name
  [ ( contract_name | [DEFAULT][ ,...n ] ) ]
[ ; ]
```

For more information, see [CREATE SERVICE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-service-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

### BEGIN DIALOG CONVERSATION


The following example starts the interaction between Service Brokers.

```
BEGIN DIALOG [ CONVERSATION ] @dialog_handle
  FROM SERVICE initiator_service_name
  TO SERVICE 'target_service_name'
    [ , { 'service_broker_guid' | 'CURRENT DATABASE' }]
  [ ON CONTRACT contract_name ]
  [ WITH
  [ { RELATED_CONVERSATION = related_conversation_handle
    | RELATED_CONVERSATION_GROUP = related_conversation_group_id } ]
  [ [ , ] LIFETIME = dialog_lifetime ]
  [ [ , ] ENCRYPTION = { ON | OFF } ] ]
[ ; ]
```

For more information, see [BEGIN DIALOG CONVERSATION (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/begin-dialog-conversation-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

### WAITFOR(RECEIVE TOP(1))


The following example specifies that a code block has to wait until one message is received.

```
[ WAITFOR ( ]
  RECEIVE [ TOP ( n ) ]
  <column_specifier> [ ,...n ]
  FROM <queue>
  [ INTO table_variable ]
  [ WHERE { conversation_handle = conversation_handle
    | conversation_group_id = conversation_group_id } ]
  [ ) ] [ , TIMEOUT timeout ]
[ ; ]

<column_specifier> ::=
{ *
  | { column_name | expression } [ [ AS ] column_alias ]
  | column_alias = expression
} [ ,...n ]

<queue> ::=
{
  [ database_name . [ schema_name ] . | schema_name . ]
    queue_name
}
```

For more information, see [RECEIVE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/receive-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

You can combine all of the preceding commands to achieve your architecture goals.
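As a minimal sketch of how the preceding commands fit together (all object names here are hypothetical, and the example also uses `SEND ON CONVERSATION`, which isn't shown above but is part of the same feature):

```
-- Define the message type and the contract that governs it.
CREATE MESSAGE TYPE [RequestMessage] VALIDATION = WELL_FORMED_XML;

CREATE CONTRACT [RequestContract]
  ([RequestMessage] SENT BY INITIATOR);

-- Create a queue and the services that use it.
CREATE QUEUE RequestQueue;

CREATE SERVICE [InitiatorService] ON QUEUE RequestQueue;
CREATE SERVICE [TargetService] ON QUEUE RequestQueue ([RequestContract]);

-- Open a dialog and send one message.
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
  FROM SERVICE [InitiatorService]
  TO SERVICE 'TargetService'
  ON CONTRACT [RequestContract]
  WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @handle
  MESSAGE TYPE [RequestMessage] (N'<request>hello</request>');

-- Wait up to five seconds for one message to arrive.
WAITFOR (
  RECEIVE TOP (1) message_type_name,
         CAST(message_body AS nvarchar(max)) AS body
  FROM RequestQueue
), TIMEOUT 5000;
```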

For more information, see [Service Broker](https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/sql-server-service-broker?view=sql-server-2017) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) doesn’t provide a compatible solution to the SQL Server Service Broker. However, you can use DB Links and AWS Lambda to achieve similar functionality.

You can combine AWS Lambda with Amazon Simple Queue Service (Amazon SQS) to reduce costs and move some load out of the database into Lambda and Amazon SQS. This approach is much more efficient. For more information, see [Using Lambda with Amazon SQS](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html).

For example, you can create a table in each database and connect each database with a DB link to read the tables and process the data. For more information, see DB Links.

You can also use AWS Lambda to query a table from the database, process the data, and insert it to another database (even another database type). This approach is the best option for moving workloads out of the database to a less expensive instance type.

For even more decoupling and reducing workloads from the database, you can use Amazon SQS with Lambda.
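If you prefer to keep a lightweight queue inside the database itself, one common PostgreSQL pattern (not covered by AWS SCT; table and column names here are hypothetical) is a queue table consumed with `FOR UPDATE SKIP LOCKED`, which lets multiple workers pull messages concurrently without blocking each other:

```
-- Hypothetical queue table.
CREATE TABLE message_queue (
  id         bigserial PRIMARY KEY,
  payload    jsonb NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Producer: enqueue a message.
INSERT INTO message_queue (payload) VALUES ('{"request": "hello"}');

-- Consumer: atomically claim and remove one message.
DELETE FROM message_queue
WHERE id = (
  SELECT id FROM message_queue
  ORDER BY id
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING payload;
```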

For more information, see [Database Mail](chap-sql-server-aurora-pg.management.databasemail.md).

# SQL Server cast and convert for T-SQL


This topic provides reference information about data type conversion and casting in Amazon Aurora PostgreSQL compared to Microsoft SQL Server. You can understand the similarities and differences between the CAST and CONVERT functions in both database systems. The topic explains how Aurora PostgreSQL supports the CAST function similarly to SQL Server, while also offering additional flexibility through custom casts and the CREATE CAST command.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |  CONVERT is used only to convert between collations. CAST uses different syntax.  | 

## SQL Server Usage


The `CAST` and `CONVERT` functions are commonly used to convert one data type to another. `CAST` and `CONVERT` behave mostly the same and share the same topic in MSDN. They have the following differences:
+  `CAST` is part of the ANSI-SQL specification, but `CONVERT` isn’t.
+  `CONVERT` accepts an optional style parameter used for formatting.

For more information, see [Date and Time styles](https://docs.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-2017#date-and-time-styles) in the *SQL Server documentation*.
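For instance, the style parameter changes the textual format of a date conversion. Styles 101 and 103 are shown below; the full list is in the style table linked above.

```
SELECT CONVERT(varchar(10), GETDATE(), 101) AS us_style,  -- mm/dd/yyyy
       CONVERT(varchar(10), GETDATE(), 103) AS uk_style;  -- dd/mm/yyyy
```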

### Conversion Matrix


For a list of available conversion data types, see [Implicit conversions](https://docs.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-2017#implicit-conversions) in the *SQL Server documentation*.

### Syntax


```
-- CAST Syntax:
CAST ( expression AS data_type [ ( length ) ] )

-- CONVERT Syntax:
CONVERT ( data_type [ ( length ) ] , expression [ , style ] )
```

### Examples


The following example casts a `string` to `int` and `int` to `decimal`.

```
SELECT CAST('23' AS int) AS [int], CAST(23 AS decimal(10, 2)) AS [decimal];
```

The following example converts `string` to `int` and `int` to `decimal`.

```
SELECT CONVERT(int, '23') AS [int], CONVERT(decimal(10, 2), 23) AS [decimal];
```

For these two preceding examples, the result looks as shown following.

```
int  decimal
23   23.00
```

The following example converts a date using style 109, `mon dd yyyy hh:mi:ss:mmmAM (or PM)`.

```
SELECT CONVERT(nvarchar(30), GETDATE(), 109);

Jul 25 2018 5:20:10.8975085PM
```

For more information, see [CAST and CONVERT (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) provides the same `CAST` function as SQL Server for conversion between data types. It also provides a `CONVERSION` function, but it isn’t equivalent to SQL Server `CONVERT`. PostgreSQL `CONVERSION` is used to convert between character set encodings.

 `CREATE CAST` defines a new cast that specifies how to convert between two data types.

A cast can be explicit or implicit.

The behavior is similar to SQL Server casting, but in PostgreSQL you can also create your own casts to change the default behavior. For example, you can back a cast with a function that validates its input, such as checking that a string is a valid credit card number, or declare two binary-compatible types with the `WITHOUT FUNCTION` clause.

 `CREATE CONVERSION` is used to convert between encoding such as UTF8 and LATIN. If `CONVERT` is currently in use in SQL Server code, rewrite it to use `CAST` instead.

**Note**  
Not all SQL Server data types are supported on Aurora PostgreSQL. Besides changing the `CAST` or `CONVERT` commands, you might also need to change the source or target data type. For more information, see [Data Types](chap-sql-server-aurora-pg.sql.datatypes.md).

Another way to convert between data types in PostgreSQL is the `::` operator. This option can make your PL/pgSQL code look cleaner and simpler, as shown in the following examples.

### Syntax


```
CREATE CAST (source_type AS target_type)
WITH FUNCTION function_name (argument_type [, ...]) [ AS ASSIGNMENT | AS IMPLICIT ]

CREATE CAST (source_type AS target_type)
WITHOUT FUNCTION [ AS ASSIGNMENT | AS IMPLICIT ]

CREATE CAST (source_type AS target_type)
WITH INOUT [ AS ASSIGNMENT | AS IMPLICIT ]
```
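The `WITH FUNCTION` form can be sketched as follows, assuming no conflicting `text` to `numeric` cast already exists; the function name and the money-style input format are illustrative, not part of the source material:

```
-- Hypothetical parser for money-like strings such as '$1,234.50'.
CREATE FUNCTION text_to_numeric(t text) RETURNS numeric AS $$
  SELECT replace(replace(t, '$', ''), ',', '')::numeric;
$$ LANGUAGE sql IMMUTABLE;

-- Register the function as an explicit cast.
CREATE CAST (text AS numeric)
  WITH FUNCTION text_to_numeric(text);

SELECT CAST('$1,234.50' AS numeric);
```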

### Examples


The following example converts a numeric value to float.

```
SELECT 23 + 2.0;

or

SELECT CAST ( 23 AS numeric ) + 2.0;
```

The following example converts a date with format input ('mon dd yyyy hh:mi:ss:mmmAM (or PM)').

```
SELECT TO_CHAR(NOW(),'Mon DD YYYY HH:MI:SS:MSAM');

Jul 25 2018 5:20:10.8975085PM
```

The following example uses the `::` characters.

```
SELECT '2.35'::DECIMAL + 4.5 AS results;

results
6.85
```

## Summary



| Option | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  Explicit `CAST`   |   `SELECT CAST('23.7' AS varchar) AS int`   |   `SELECT CAST('23.7' AS varchar) AS int`   | 
|  Explicit `CONVERT`   |   `SELECT CONVERT (VARCHAR, '23.7')`   |  Use `CAST` instead: `SELECT CAST('23.7' AS varchar) AS int`   | 
|  Implicit casting  |   `SELECT 23 + 2.0`   |   `SELECT 23 + 2.0`   | 
|  Convert to a specific date format: `'mon dd yyyy hh:mi:ss:mmmAM'`   |   `SELECT CONVERT(nvarchar (30), GETDATE(), 109)`   |   `SELECT TO_CHAR(NOW(),'Mon DD YYYY HH:MI:SS:MSAM')`   | 

For more information, see [CREATE CAST](https://www.postgresql.org/docs/13/sql-createcast.html), [Type Conversion](https://www.postgresql.org/docs/13/typeconv.html), and [CREATE CONVERSION](https://www.postgresql.org/docs/13/sql-createconversion.html) in the *PostgreSQL documentation*.

# Common Language Runtime for T-SQL


This topic provides reference information about migrating Microsoft SQL Server’s Common Language Runtime (CLR) objects to Amazon Aurora PostgreSQL. You can understand the differences in functionality between SQL Server’s CLR capabilities and alternatives. The topic explains that while Aurora PostgreSQL doesn’t support .NET code directly, it offers Perl as an alternative for creating similar database objects.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[One star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-1.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |  N/A  |  Migrating CLR objects requires a full code rewrite.  | 

## SQL Server Usage


SQL Server provides the capability of implementing .NET objects in the database using the Common Language Runtime (CLR). The CLR enables development of functionality that would be complicated to implement using T-SQL.

The CLR provides robust solutions for string manipulation, date manipulation, and calling external services such as Windows Communication Foundation (WCF) services and web services.

You can create the following objects with the `EXTERNAL NAME` clause:
+ Procedures. For more information, see [CLR Stored Procedures](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/clr-stored-procedures?redirectedfrom=MSDN) in the *SQL Server documentation*.
+ Functions. For more information, see [Create CLR Functions](https://docs.microsoft.com/en-us/sql/relational-databases/user-defined-functions/create-clr-functions?view=sql-server-2017) in the *SQL Server documentation*.
+ Triggers. For more information, see [Create CLR Triggers](https://docs.microsoft.com/en-us/sql/relational-databases/triggers/create-clr-triggers?view=sql-server-2017) in the *SQL Server documentation*.
+ Types. For more information, see [CLR User-Defined Types](https://docs.microsoft.com/en-us/sql/relational-databases/clr-integration-database-objects-user-defined-types/clr-user-defined-types?view=sql-server-2017) in the *SQL Server documentation*.
+ User-defined aggregate functions. For more information, see [CLR User-Defined Aggregates](https://docs.microsoft.com/en-us/sql/relational-databases/clr-integration-database-objects-user-defined-functions/clr-user-defined-aggregates?view=sql-server-2017) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) doesn’t support .NET code. However, you can create Perl functions. In this case, convert all C# code to PL/pgSQL or PL/Perl.

To use PL/Perl language, install the Perl extension:

```
CREATE EXTENSION plperl;
```

After you install the Perl extension, you can create functions using Perl code. Specify `plperl` in the `LANGUAGE` clause.

You can create the following objects with Perl:
+ Functions.
+ Void functions or procedures.
+ Triggers.
+ Event Triggers.
+ Values for session level.

### Examples


The following example creates a function that returns the greater value of two integers.

```
CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
  if ($_[0] > $_[1]) { return $_[0]; }
  return $_[1];
$$ LANGUAGE plperl;
```
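The object list above also includes triggers. A minimal PL/Perl trigger sketch follows (table and column names are hypothetical; `EXECUTE FUNCTION` is PostgreSQL 11+ syntax, older versions use `EXECUTE PROCEDURE`):

```
CREATE TABLE audit_demo (val text);

CREATE FUNCTION lowercase_val() RETURNS trigger AS $$
  # $_TD holds the trigger data; force the "val" column to lowercase.
  $_TD->{new}{val} = lc($_TD->{new}{val});
  return "MODIFY";   # tell PostgreSQL to use the modified row
$$ LANGUAGE plperl;

CREATE TRIGGER lowercase_val_trg
  BEFORE INSERT OR UPDATE ON audit_demo
  FOR EACH ROW EXECUTE FUNCTION lowercase_val();
```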

For more information, see [PL/Perl — Perl Procedural Language](https://www.postgresql.org/docs/13/plperl.html) in the *PostgreSQL documentation*.

# Collations for T-SQL


This topic provides reference information about collations and character sets in Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL, highlighting their differences and similarities. You can gain insight into how these database systems handle string management, sorting rules, and character encoding. The topic explores the various levels at which collations can be defined in SQL Server, from server-level to expression-level, and contrasts this with PostgreSQL’s approach.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |   [Collations](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.collations)   |   `UTF16`, `NCHAR`, and `NVARCHAR` data types aren’t supported.  | 

## SQL Server Usage


SQL Server collations define the rules for string management and storage in terms of sorting, case sensitivity, accent sensitivity, and code page mapping. SQL Server supports both ASCII and UCS-2 UNICODE data.

UCS-2 UNICODE data uses a dedicated set of UNICODE data types denoted by the prefix N: `NCHAR` and `NVARCHAR`. Their ASCII counterparts are `CHAR` and `VARCHAR`.

Choosing a collation and a character set has significant implications on data storage, logical predicate evaluations, query results, and query performance.

To view all collations supported by SQL Server, use the `fn_helpcollations` function:

```
SELECT * FROM sys.fn_helpcollations()
```

Collations define the actual bitwise binary representation of all string characters and the associated sorting rules. SQL Server supports multiple collations down to the column level. A table may have multiple string columns that use different collations. Collations for non-UNICODE character sets determine the code page number representing the string characters.

UNICODE and non-UNICODE data types in SQL Server aren’t compatible. A predicate or data modification that introduces a type conflict is resolved using predefined collation precedence rules. For more information, see [Collation Precedence](https://docs.microsoft.com/en-us/sql/t-sql/statements/collation-precedence-transact-sql?view=sql-server-ver15).
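When two columns with different collations meet in a predicate, you can also resolve the conflict explicitly with the `COLLATE` clause (table and column names here are hypothetical):

```
SELECT *
FROM Orders AS o
JOIN ImportedOrders AS i
  ON o.CustomerCode = i.CustomerCode COLLATE Latin1_General_CI_AS;
```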

Collations define sorting and matching sensitivity for the following string characteristics:
+ Case
+ Accent
+ Kana
+ Width
+ Variation selector

SQL Server uses a suffix naming convention that appends the option name to the collation name. For example, the collation Azeri_Cyrillic_100_CS_AS_KS_WS_SC is an Azeri-Cyrillic-100 collation that is case-sensitive, accent-sensitive, kana type-sensitive, width-sensitive, and supports supplementary characters.

SQL Server supports three types of collation sets:
+  **Windows collations** use the rules defined for collations by the operating system locale where UNICODE and non-UNICODE data use the same comparison algorithms.
+  **Binary collations** use the binary bit-wise code for comparison. Therefore, the locale doesn’t affect sorting.
+  **SQL Server collations** provide backward compatibility with previous SQL Server versions. They aren’t compatible with the Windows collation rules for non-UNICODE data.

You can define collations at various levels:
+  **Server-level collations** determine the collation used for all system databases and serve as the default for new user databases. While the system databases collation can’t be changed, you can specify an alternative collation as part of the `CREATE DATABASE` statement.
+  **Database-level collations** inherit the server default unless the `CREATE DATABASE` statement explicitly sets a different collation. This collation is used as a default for all `CREATE TABLE` and `ALTER TABLE` statements.
+  **Column-level collations** can be specified as part of the `CREATE TABLE` or `ALTER TABLE` statements to override the default collation setting of your database.
+  **Expression-level collations** can be set for individual string expressions using the COLLATE function. For example, `SELECT * FROM MyTable ORDER BY StringColumn COLLATE Latin1_General_CS_AS`.

Versions of SQL Server earlier than SQL Server 2019 support UCS-2 UNICODE only.

SQL Server 2019 adds support for UTF-8 for import and export encoding, and as database-level or column-level collation for string data. Support includes PolyBase external tables, and Always Encrypted when not used with Enclaves. For more information, see [Collation and Unicode Support](https://docs.microsoft.com/en-us/sql/relational-databases/collations/collation-and-unicode-support?view=sql-server-ver15).
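For example, SQL Server 2019 can store UTF-8 data in `char` and `varchar` columns by choosing a collation with the `_UTF8` suffix:

```
CREATE DATABASE MyUtf8Database
COLLATE Latin1_General_100_CI_AS_SC_UTF8;
```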

### Syntax


```
CREATE DATABASE <Database Name>
[ ON <File Specifications> ]
COLLATE <Collation>
[ WITH <Database Option List> ];
```

```
CREATE TABLE <Table Name>
(
<Column Name> <String Data Type>
COLLATE <Collation> [ <Column Constraints> ]...
);
```

### Examples


The following example creates a database with a default Bengali_100_CS_AI collation.

```
CREATE DATABASE MyBengaliDatabase
ON
( NAME = MyBengaliDatabase_Datafile,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\MyBengaliDatabase.mdf', SIZE = 100)
LOG ON
( NAME = MyBengaliDatabase_Logfile,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\MyBengaliDblog.ldf', SIZE = 25)
COLLATE Bengali_100_CS_AI;
```

The following example creates a table with two different collations.

```
CREATE TABLE MyTable
(
Col1 CHAR(10) COLLATE Hungarian_100_CI_AI_SC NOT NULL PRIMARY KEY,
COL2 VARCHAR(100) COLLATE Sami_Sweden_Finland_100_CS_AS_KS NOT NULL
);
```

For more information, see [Collation and Unicode support](https://docs.microsoft.com/en-us/sql/relational-databases/collations/collation-and-unicode-support?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL supports a variety of character sets, also known as encodings, including both single-byte and multi-byte languages. The default character set is specified when initializing a PostgreSQL database cluster with `initdb`. Each individual database created on the PostgreSQL cluster supports an individual character set defined as part of database creation.

**Note**  
For Amazon Relational Database Service (Amazon RDS), starting with PostgreSQL 13, the Windows version now supports obtaining version information for collations or ordering rules from the operating system.

When you query a collation in PostgreSQL running on Windows, versions earlier than 13 don't return any value that reflects the OS collation version. For example, for PostgreSQL version 11 running on Windows, the result is shown following:

```
CREATE COLLATION german (provider = libc, locale = 'de_DE');

CREATE COLLATION

select oid,collname,collversion from pg_collation
where collprovider='c' and collname='german';

oid    collname  collversion
16394  german
(1 row)

select pg_collation_actual_version (16394);

pg_collation_actual_version
(1 row)
```

For PostgreSQL version 13 running on Windows, the result is shown following:

```
CREATE COLLATION german (provider = libc, locale = 'de_DE');

CREATE COLLATION

select oid,collname,collversion from pg_collation
where collprovider='c' and collname='german';

oid    collname  collversion
32769  german    1539.5,1539.5
(1 row)

select pg_collation_actual_version (32769);

pg_collation_actual_version
1539.5,1539.5
(1 row)
```

Clients can use all supported character sets. However, some client-side only characters aren’t supported for use within the server.

Unlike SQL Server, PostgreSQL doesn’t natively support an `NVARCHAR` data type and doesn’t provide support for UTF-16.


| Type | Function | Implementation level | 
| --- | --- | --- | 
|  Encoding  |  Defines the basic rules on how alphanumeric characters are represented in binary format. For example, Unicode encoding.  |  Database  | 
|  Locale  |  A superset that includes `LC_COLLATE` and `LC_CTYPE` among others. For example, `LC_COLLATE` defines how strings are sorted and must be a subset supported by the database encoding.  |  Table-Column  | 

### Examples


The following example creates a database named `test01` that uses the Korean `EUC_KR` encoding and the `ko_KR` locale.

```
CREATE DATABASE test01 WITH ENCODING 'EUC_KR' LC_COLLATE='ko_KR.euckr' LC_CTYPE='ko_KR.euckr' TEMPLATE=template0;
```

The following example shows how to view the character sets configured for each database by querying the system catalog.

```
select datname, datcollate, datctype from pg_database;
```

### Changing Character Sets or Encoding


In-place modification of the database encoding is neither recommended nor supported. Instead, export all data, create a new database with the new encoding, and import the data.

Export the data using the `pg_dump` utility.

```
pg_dump mydb1 > mydb1_export.sql
```

Rename or delete a database.

```
ALTER DATABASE mydb1 RENAME TO mydb1_backup;
```

Create a new database using the modified encoding.

```
CREATE DATABASE mydb1_new_encoding WITH ENCODING 'UNICODE' TEMPLATE=template0;
```

Import data using the `pg_dump` file previously created. Verify that you set your client encoding to the encoding of your old database.

```
PGCLIENTENCODING=OLD_DB_ENCODING psql -f mydb1_export.sql mydb1_new_encoding
```

The `client_encoding` parameter overrides the use of `PGCLIENTENCODING`.

### Client-Server Character Set Conversions


PostgreSQL supports conversion of character sets between servers and clients for specific character set combinations as described in the `pg_conversion` system catalog.

PostgreSQL includes predefined conversions. For more information, see [Available Character Set Conversions](https://www.postgresql.org/docs/13/static/multibyte.html#MULTIBYTE-TRANSLATION-TABLE) in the *PostgreSQL documentation*.

You can create a new conversion using the SQL command `CREATE CONVERSION`.

### Examples


The following example creates a conversion from UTF8 to LATIN1 using the custom `myfunc1` function.

```
CREATE CONVERSION myconv FOR 'UTF8' TO 'LATIN1' FROM myfunc1;
```

The following example configures the PostgreSQL client character set.

```
-- Method 1: from within the psql client
\encoding SJIS

-- Method 2: using SQL
SET client_encoding TO 'value';
```

View the client character set and reset it back to the default value.

```
SHOW client_encoding;

RESET client_encoding;
```

### Table Level Collation


PostgreSQL supports specifying the sort order and character classification behavior on a per-column level.

### Example


Specify specific collations for individual table columns.

```
CREATE TABLE test1 (col1 text COLLATE "de_DE", col2 text COLLATE "es_ES");
```
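Once defined, a column's collation drives comparisons and sorting for that column by default. You can also override it per query with the `COLLATE` clause, as in this sketch against the `test1` table above:

```
SELECT col1 FROM test1 ORDER BY col1 COLLATE "es_ES";
```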

## Summary



| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  View database character set  |   `SELECT collation_name FROM sys.databases;`   |   `select datname, pg_encoding_to_char(encoding), datcollate, datctype from pg_database;`   | 
|  Modify the database character set  |  Recreate the database  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/chap-sql-server-aurora-pg.tsql.collations.html)  | 
|  Character set granularity  |  Database  |  Database  | 
|  UTF8  |  Supported  |  Supported  | 
|  UTF16  |  Supported  |  Not Supported  | 
|   `NCHAR` or `NVARCHAR` data types  |  Supported  |  Not Supported  | 

For more information, see [Character Set Support](https://www.postgresql.org/docs/13/multibyte.html) in the *PostgreSQL documentation*.

# Cursors for T-SQL


This topic provides reference information about cursor compatibility between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. It introduces the concept of cursors and their role in database operations, explaining how they allow developers to work with result sets sequentially. The topic compares cursor functionality in SQL Server and PostgreSQL, highlighting similarities and differences in syntax and usage.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |   [Cursors](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.cursors)   |  Different cursor options.  | 

## SQL Server Usage


A *set* is a fundamental concept of the relational data model from which SQL is derived. SQL is a declarative language that operates on whole sets, unlike most procedural languages that operate on individual data elements. A single invocation of an SQL statement can return a whole set or modify millions of rows.

Many developers are accustomed to using procedural or imperative approaches to develop solutions that are difficult to implement using set-based querying techniques. Also, operating on row data sequentially may be a more appropriate approach in certain situations.

Cursors provide an alternative mechanism for operating on result sets. Instead of receiving a table object containing rows of data, applications can use cursors to access the data sequentially, row-by-row. Cursors provide the following capabilities:
+ Positioning the cursor at specific rows of the result set using absolute or relative offsets.
+ Retrieving a row, or a block of rows, from the current cursor position.
+ Modifying data at the current cursor position.
+ Isolating data modifications by concurrent transactions that affect the cursor’s result.
+ Accessing cursors from T-SQL scripts, stored procedures, and triggers.

### Syntax


```
DECLARE <Cursor Name>
CURSOR [LOCAL | GLOBAL]
  [FORWARD_ONLY | SCROLL]
  [STATIC | KEYSET | DYNAMIC | FAST_FORWARD]
  [ READ_ONLY | SCROLL_LOCKS | OPTIMISTIC]
  [TYPE_WARNING]
  FOR <SELECT statement>
  [ FOR UPDATE [ OF <Column List>]][;]
```

```
FETCH [NEXT | PRIOR | FIRST | LAST | ABSOLUTE <Value> | RELATIVE <Value>]
FROM <Cursor Name> INTO <Variable List>;
```

### Examples


Process data in a cursor.

```
DECLARE MyCursor CURSOR FOR
  SELECT T1.Col1
  FROM Table1 AS T1
    INNER JOIN Table2 AS T2
    ON T1.Col1 = T2.Col1;

DECLARE @VarCursor1 VARCHAR(20);

OPEN MyCursor;

FETCH NEXT FROM MyCursor INTO @VarCursor1;

WHILE @@FETCH_STATUS = 0
BEGIN
  EXEC MyProcessingProcedure
    @InputParameter = @VarCursor1;
  FETCH NEXT FROM MyCursor INTO @VarCursor1;
END

CLOSE MyCursor;
DEALLOCATE MyCursor;
```

For more information, see [SQL Server Cursors](https://docs.microsoft.com/en-us/sql/relational-databases/cursors?view=sql-server-ver15) and [Cursors (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/cursors-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


Similar to T-SQL cursors in SQL Server, PostgreSQL has PL/pgSQL cursors that you can use to apply business logic row by row to data read from the database. They can encapsulate the query and read the query results a few rows at a time. All access to cursors in PL/pgSQL is performed through cursor variables, which are always of the `refcursor` data type.
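
Similar to the SQL Server cursor loop shown earlier, the following PL/pgSQL sketch iterates over a result set row by row. The `employees` table and the processing logic are assumptions for illustration.

```
DO $$
DECLARE
  cur CURSOR FOR SELECT id, name FROM employees;
  emp_id integer;
  emp_name varchar;
BEGIN
  OPEN cur;
  LOOP
    FETCH cur INTO emp_id, emp_name;
    EXIT WHEN NOT FOUND;
    -- Process the row; replace with a procedure call as needed.
    RAISE NOTICE 'Processing employee %: %', emp_id, emp_name;
  END LOOP;
  CLOSE cur;
END$$;
```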

### Examples


 **Declare a Cursor** 

The following table includes the `DECLARE..CURSOR` options that are Transact-SQL extended syntax and have no direct equivalent in PostgreSQL.


| SQL Server option | Use | Comments | 
| --- | --- | --- | 
|   `FORWARD_ONLY`   |  Defining that `FETCH NEXT` is the only supported fetching option.  |  Using `FOR LOOP` might be a relevant solution for this option.  | 
|   `STATIC`   |  Cursor will make a temporary copy of the data.  |  For small data sets, you can create temporary tables and declare a cursor that selects from them.  | 
|   `KEYSET`   |  Determining that membership and order of rows in the cursor are fixed.  |  N/A  | 
|   `DYNAMIC`   |  Cursor will reflect all data changes made on the selected rows.  |  Default for PostgreSQL.  | 
|   `FAST_FORWARD`   |  Will use `FORWARD_ONLY` and `READ_ONLY` to optimize performance.  |  N/A  | 
|   `SCROLL_LOCKS`   |  Determine that positioned updates or deletes made by the cursor are guaranteed to succeed.  |  N/A  | 
|   `OPTIMISTIC`   |  Determine that positioned updates or deletes made by the cursor will not succeed if the row has been updated.  |  N/A  | 
|   `TYPE_WARNING`   |  Will send warning messages to the client if the cursor is implicitly converted from the requested type.  |  N/A  | 
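
As noted for `FORWARD_ONLY` in the table above, a PL/pgSQL `FOR` loop is often the simplest replacement for a forward-only, read-only cursor. A minimal sketch, assuming an `employees` table:

```
DO $$
DECLARE
  rec RECORD;
BEGIN
  -- The loop opens, fetches from, and closes an implicit cursor.
  FOR rec IN SELECT id, name FROM employees LOOP
    RAISE NOTICE 'Processing employee %: %', rec.id, rec.name;
  END LOOP;
END$$;
```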

Declare a cursor in PL/pgSQL to be used with any query. The variable `c1` is unbound because it isn’t bound to any particular query.

```
DECLARE c1 refcursor;
```

Declare a cursor in PL/pgSQL with a bound query.

```
DECLARE c2 CURSOR FOR SELECT * FROM employees;
```

Declare a cursor with a parameterized bound query:
+ The id variable is replaced by an integer parameter value when the cursor is opened.
+ When declaring a Cursor with SCROLL specified, the Cursor can scroll backwards.
+ If `NO SCROLL` is specified, backward fetches are rejected.

```
DECLARE c3 CURSOR (var1 integer) FOR SELECT * FROM employees where id = var1;
```

Declare a backward-scrolling compatible Cursor using the `SCROLL` option.
+  `SCROLL` specifies that rows can be retrieved backwards. `NO SCROLL` specifies that rows can’t be retrieved backwards.
+ Depending upon the complexity of the run plan for the query, `SCROLL` might create performance issues.
+ Backward fetches aren’t allowed when the query includes `FOR UPDATE` or `FOR SHARE`.

```
DECLARE c3 SCROLL CURSOR FOR SELECT id, name FROM employees;
```

 **Open a Cursor** 

The `OPEN` command is fully compatible between SQL Server and PostgreSQL.

Open a cursor variable that was declared as unbound and specify the query to run.

```
OPEN c1 FOR SELECT * FROM employees WHERE id = emp_id;
```

Open a cursor variable that was declared as unbound and specify the query to run as a string expression. This approach provides greater flexibility.

```
OPEN c1 FOR EXECUTE format('SELECT * FROM %I WHERE col1 = $1',tabname) USING keyvalue;
```

You can insert parameter values into the dynamic command with `format()` and `USING`. For example, the table name is inserted into the query with `format()`. The comparison value for `col1` is inserted with a `USING` parameter.

Open a cursor that was bound to a query and declared to take arguments.

```
DO $$
DECLARE
  c3 CURSOR (var1 integer) FOR SELECT * FROM employees where id = var1;
BEGIN
  OPEN c3(var1 := 42);
END$$;
```

For the `c3` cursor, supply the argument value expressions.

If the cursor wasn’t declared to take arguments, you can assign values to the variables referenced in the cursor query before opening it.

```
DO $$
DECLARE
  var1 integer;
  c3 CURSOR FOR SELECT * FROM employees where id = var1;
BEGIN
  var1 := 1;
  OPEN c3;
END$$;
```

 **Fetch a Cursor** 

Use the following syntax to fetch a cursor.

```
FETCH [ direction [ FROM | IN ] ] cursor_name
```

The following table shows additional PostgreSQL options for the direction clause of the `FETCH` command.


| PostgreSQL option | Use | 
| --- | --- | 
|  ALL  |  Fetch all remaining rows  | 
|  FORWARD  |  Same as NEXT  | 
|  FORWARD n  |  Fetch the next n rows  | 
|  FORWARD ALL  |  Same as ALL  | 
|  BACKWARD  |  Same as PRIOR  | 
|  BACKWARD n  |  Fetch the prior n rows  | 
|  BACKWARD ALL  |  Fetch all prior rows  | 

The PL/pgSQL `FETCH` command retrieves the next row from the cursor into a variable.

Fetch the values returned from the `c3` cursor into a row variable.

```
DO $$
DECLARE
  c3 CURSOR FOR SELECT * FROM employees;
  rowvar employees%ROWTYPE;
BEGIN
  OPEN c3;
  FETCH c3 INTO rowvar;
END$$;
```

Fetch the values returned from the `c3` cursor into two scalar variables.

```
DO $$
DECLARE
  c3 CURSOR FOR SELECT id, name FROM employees;
  emp_id integer;
  emp_name varchar;
BEGIN
  OPEN c3;
  FETCH FROM c3 INTO emp_id, emp_name;
END$$;
```

PL/pgSQL supports a special direction clause when fetching data from a cursor using the `NEXT`, `PRIOR`, `FIRST`, `LAST`, `ABSOLUTE count`, `RELATIVE count`, `FORWARD`, or `BACKWARD` arguments. Omitting direction is equivalent to specifying `NEXT`. For example, fetch the last row from the cursor into the declared variables.

```
DO $$
DECLARE
  c3 CURSOR FOR SELECT id, name FROM employees;
  emp_id integer;
  emp_name varchar;
BEGIN
  OPEN c3;
  FETCH LAST FROM c3 INTO emp_id, emp_name;
END$$;
```

## Summary



| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  Cursor options  |  <pre>[FORWARD_ONLY | SCROLL] [STATIC | KEYSET | DYNAMIC | FAST_FORWARD]<br />[READ_ONLY | SCROLL_LOCKS | OPTIMISTIC]</pre>  |  <pre>[ BINARY ] [ INSENSITIVE ] [ [ NO ]<br />SCROLL ] CURSOR [ { WITH | WITHOUT } HOLD ]</pre>  | 
|  Updateable cursors  |  <pre>DECLARE CURSOR... FOR UPDATE</pre>  |  <pre>DECLARE cur_name CURSOR... FOR UPDATE</pre>  | 
|  Cursor declaration  |  <pre>DECLARE CURSOR</pre>  |  <pre>DECLARE cur_name CURSOR</pre>  | 
|  Cursor open  |  <pre>OPEN</pre>  |  <pre>OPEN</pre>  | 
|  Cursor fetch  |  <pre>FETCH NEXT | PRIOR | FIRST | LAST | ABSOLUTE | RELATIVE</pre>  |  <pre>FETCH [ direction [ FROM | IN ] ] cursor_name</pre> The direction can be empty or one of the following: `NEXT`, `PRIOR`, `FIRST`, `LAST`, `ABSOLUTE count`, `RELATIVE count`, `count`, `ALL`, `FORWARD`, `FORWARD count`, `FORWARD ALL`, `BACKWARD`, `BACKWARD count`, `BACKWARD ALL`.  | 
|  Cursor close  |  <pre>CLOSE</pre>  |  <pre>CLOSE</pre>  | 
|  Cursor deallocate  |  <pre>DEALLOCATE</pre>  |  Same effect as CLOSE (not required)  | 
|  Cursor end condition  |   `@@FETCH_STATUS` system variable  |  No direct equivalent. In PL/pgSQL, check the `FOUND` variable after each `FETCH`.  | 

For more information, see [FETCH](https://www.postgresql.org/docs/13/sql-fetch.html) in the *PostgreSQL documentation*.

# Date and time functions for T-SQL


This topic provides reference information about date and time functions in PostgreSQL compared to Microsoft SQL Server, which is valuable for database administrators and developers migrating from SQL Server to Amazon Aurora PostgreSQL. You can understand the differences in function names, syntax, and behavior between the two database systems when working with temporal data. The topic highlights key date and time functions, their equivalents across platforms, and offers guidance on handling potential compatibility issues.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |   [Data Types](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.types)   |  PostgreSQL is using different function names.  | 

## SQL Server Usage


Date and Time Functions are scalar functions that perform operations on temporal or numeric input and return temporal or numeric values.

System date and time values are derived from the operating system of the server on which SQL Server is running.

This section doesn’t address time zone considerations and time zone aware functions. For more information about time zone handling, see [Data Types](chap-sql-server-aurora-pg.sql.datatypes.md).

### Syntax and Examples


The following table includes the most commonly used date and time functions.


| Function | Purpose | Example | Result | Comments | 
| --- | --- | --- | --- | --- | 
|   `GETDATE` and `GETUTCDATE`   |  Return a datetime value that contains the current local or UTC date and time.  |   `SELECT GETDATE()`   |  2018-04-05 15:53:01.380  |  | 
|   `DATEPART`, `DAY`, `MONTH`, and `YEAR`   |  Return an integer value representing the specified `DATEPART` of a specified date.  |   `SELECT MONTH(GETDATE()), YEAR(GETDATE())`   |  4, 2018  |  | 
|   `DATEDIFF`   |  Returns an integer value of `DATEPART` boundaries that are crossed between two dates.  |   `SELECT DATEDIFF(DAY, GETDATE(), EOMONTH(GETDATE()))`   |  25  |  Number of days remaining until the end of the month.  | 
|   `DATEADD`   |  Returns a datetime value that is calculated with an offset interval to the specified `DATEPART` of a date.  |   `SELECT DATEADD(DAY, 25, GETDATE())`   |  2018-04-30 15:55:52.147  |  | 
|   `CAST` and `CONVERT`   |  Converts datetime values to and from string literals and to and from other datetime formats.  |   `SELECT CAST (GETDATE() AS DATE)` `SELECT CONVERT (VARCHAR(20), GETDATE(), 112)`   |  2018-04-05 20180405  |  Default date format. Style 112 (ISO) with no separators.  | 

For more information, see [Date and Time functions](https://docs.microsoft.com/en-us/sql/t-sql/functions/date-and-time-data-types-and-functions-transact-sql?view=sql-server-ver15#DateandTimeFunctions) in the *SQL Server documentation*.

## PostgreSQL Usage


Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) provides a richer set of scalar date and time functions than SQL Server.

While some of the functions appear to be similar to those in SQL Server, the functionality is significantly different. Take extra care when migrating temporal logic to Aurora PostgreSQL paradigms.

### Functions and Definition



| PostgreSQL function | Function definition | 
| --- | --- | 
|   `AGE`   |  Subtract from `current_date`.  | 
|   `CLOCK_TIMESTAMP`   |  Current date and time.  | 
|   `CURRENT_DATE`   |  Current date.  | 
|   `CURRENT_TIME`   |  Current time of day.  | 
|   `CURRENT_TIMESTAMP`   |  Current date and time (start of current transaction).  | 
|   `DATE_PART`   |  Get subfield (equivalent to extract).  | 
|   `DATE_TRUNC`   |  Truncate to specified precision.  | 
|   `EXTRACT`   |  Get subfield.  | 
|   `ISFINITE`   |  Test for finite interval.  | 
|   `JUSTIFY_DAYS`   |  Adjust interval so 30-day time periods are represented as months.  | 
|   `JUSTIFY_HOURS`   |  Adjust interval so 24-hour time periods are represented as days.  | 
|   `JUSTIFY_INTERVAL`   |  Adjust interval using `justify_days` and `justify_hours`, with additional sign adjustments.  | 
|   `LOCALTIME`   |  Current time of day.  | 
|   `MAKE_DATE`   |  Create date from year, month and day fields.  | 
|   `MAKE_INTERVAL`   |  Create interval from years, months, weeks, days, hours, minutes and seconds fields.  | 
|   `MAKE_TIME`   |  Create time from hour, minute and seconds fields.  | 
|   `MAKE_TIMESTAMP`   |  Create timestamp from year, month, day, hour, minute, and seconds fields.  | 
|   `MAKE_TIMESTAMPTZ`   |  Create timestamp with time zone from year, month, day, hour, minute, and seconds fields. If the time zone isn’t specified, the current time zone is used.  | 
|  NOW  |  Current date and time.  | 
|   `STATEMENT_TIMESTAMP`   |  Current date and time.  | 
|   `TIMEOFDAY`   |  Current date and time (like `clock_timestamp`, but as a text string).  | 
|   `TRANSACTION_TIMESTAMP`   |  Current date and time.  | 
|   `TO_TIMESTAMP`   |  Convert Unix epoch (seconds since 1970-01-01 00:00:00+00) to timestamp.  | 
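
A few of these functions in action. Results for the current-date calls depend on when you run them, so only the deterministic results are shown.

```
SELECT CURRENT_DATE;
SELECT AGE(TIMESTAMP '2000-01-01');
SELECT DATE_TRUNC('month', NOW());
SELECT MAKE_DATE(2018, 4, 5);       -- 2018-04-05
SELECT MAKE_INTERVAL(days => 25);   -- 25 days
```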

## Summary



| SQL Server function |  Aurora PostgreSQL function | 
| --- | --- | 
|   `GETDATE`, `CURRENT_TIMESTAMP`   |   `NOW`, `CURRENT_DATE`, `CURRENT_TIME`, `CURRENT_TIMESTAMP`   | 
|   `GETUTCDATE`   |   `current_timestamp at time zone 'utc'`   | 
|   `DAY`, `MONTH`, and `YEAR`   |   `EXTRACT(DAY/MONTH/YEAR FROM TIMESTAMP timestamp_value)`   | 
|   `DATEPART`   |   `EXTRACT`, `DATE_PART`   | 
|   `DATEDIFF`   |   `DATE_PART`   | 
|   `DATEADD`   |   `+ INTERVAL 'X days/months/years'`   | 
|   `CAST` and `CONVERT`   |   `CAST`   | 
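
For example, the `DATEADD` and `DATEDIFF` rows of this table translate as follows; the literal timestamps are illustrative.

```
-- SQL Server: SELECT DATEADD(DAY, 25, GETDATE())
SELECT NOW() + INTERVAL '25 days';

-- SQL Server: SELECT DATEDIFF(DAY, '2018-04-05', '2018-04-30')
SELECT DATE_PART('day',
  TIMESTAMP '2018-04-30' - TIMESTAMP '2018-04-05');  -- 25
```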

For more information, see [Date/Time Functions and Operators](https://www.postgresql.org/docs/13/functions-datetime.html) in the *PostgreSQL documentation*.

# String functions for T-SQL


Compare string function compatibility between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. Gain insights into how various string functions in SQL Server map to their PostgreSQL equivalents, which is crucial for database migration projects. The information highlights supported functions, unsupported ones, and alternative approaches in PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |  Syntax and option differences.  | 

## SQL Server Usage


String functions are typically scalar functions that perform an operation on string input and return a string or a numeric value.

### Syntax and Examples


The following table includes the most commonly used string functions.


| Function | Purpose | Example | Result | Comments | 
| --- | --- | --- | --- | --- | 
|   `ASCII` and `UNICODE`   |  Convert an ASCII or UNICODE character to its ASCII or UNICODE code.  |   `SELECT ASCII ('A')`   |  65  |  Returns a numeric integer value.  | 
|   `CHAR` and `NCHAR`   |  Convert between ASCII or UNICODE code to a string character.  |   `SELECT CHAR(65)`   |  'A'  |  Numeric integer value as input.  | 
|   `CHARINDEX` and `PATINDEX`   |  Find the starting position of one string expression or string pattern within another string expression.  |   `SELECT CHARINDEX('ab','xabcdy')`   |  2  |  Returns a numeric integer value.  | 
|   `CONCAT` and `CONCAT_WS`   |  Combine multiple string input expressions into a single string with, or without, a separator character (WS).  |   `SELECT CONCAT ('a','b'), CONCAT_WS(',','a','b')`   |  'ab', 'a,b'  |  | 
|   `LEFT`, `RIGHT`, and `SUBSTRING`   |  Return a partial string from another string expression based on position and length.  |   `SELECT LEFT ('abs',2),SUBSTRING ('abcd',2,2)`   |  'ab', 'bc'  |  | 
|   `LOWER` and `UPPER`   |  Return a string with all characters in lower or upper case. Use for presentation or to handle case insensitive expressions.  |   `SELECT LOWER('ABcd')`   |  'abcd'  |  | 
|   `LTRIM`, `RTRIM`, and `TRIM`   |  Remove leading and trailing spaces.  |   `SELECT LTRIM(' abc d ')`   |  'abc d '  |  | 
|   `STR`   |  Convert a numeric value to a string.  |   `SELECT STR(3.1415927,5,3)`   |  3.142  |  Numeric expressions as input.  | 
|   `REVERSE`   |  Return a string in reverse order.  |   `SELECT REVERSE('abcd')`   |  'dcba'  |  | 
|   `REPLICATE`   |  Return a string that consists of zero or more concatenated copies of another string expression.  |   `SELECT REPLICATE ('abc', 3)`   |  'abcabcabc'  |  | 
|   `REPLACE`   |  Replace all occurrences of a string expression with another.  |   `SELECT REPLACE('abcd', 'bc', 'xy')`   |  'axyd'  |  | 
|   `STRING_SPLIT`   |  Parse a list of values with a separator and return a set of all individual elements.  |   `SELECT * FROM STRING_SPLIT('1,2', ',') AS X`   |  Two rows: 1 and 2  |   `STRING_SPLIT` is a table-valued function.  | 
|   `STRING_AGG`   |  Return a string that consists of concatenated string values in row groups.  |   `SELECT ID, STRING_AGG(C, ',') FROM (VALUES (1,'a'), (1,'b'), (2,'c')) AS X(ID, C) GROUP BY ID`   |  1 'a,b'  |  2 'c'  | 

For more information, see [String Functions (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/functions/string-functions-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


Most SQL Server string functions are supported in PostgreSQL; a few aren’t:
+  `UNICODE` returns the integer value of the first character as defined by the Unicode standard. For UTF8 input, you can use `ASCII` to get the same result.
+  `PATINDEX` returns the starting position of the first occurrence of a pattern in a specified expression, or zero if the pattern isn’t found. There is no equivalent function in PostgreSQL, but you can create a user-defined function with the same name for full compatibility.

Some unsupported functions have an equivalent PostgreSQL function that provides the same functionality.

Some PostgreSQL functions, such as the regular expression functions, don’t exist in SQL Server and may be useful for your application.

### Syntax and Examples


The following table includes the most commonly used string functions.


| PostgreSQL function | Function definition | 
| --- | --- | 
|   `CONCAT`   |  Concatenate the text representations of all the arguments: `concat('a', 1)` → a1. You can also use the `\|\|` operator: `select 'a' \|\| ' ' \|\| 'b'` → a b.  | 
|   `LOWER` or `UPPER`   |  Returns char, with all letters lowercase or uppercase: `lower ('MR. Smith')` → mr. smith.  | 
|   `LPAD` or `RPAD`   |  Returns `expr1`, left or right padded to length n characters with the sequence of characters in `expr2`: `LPAD('Log-1',10,'@')` → @@@@@Log-1.  | 
|   `REGEXP_REPLACE`   |  Replace substrings matching a POSIX regular expression: `regexp_replace('John', '[hn].', '1')` → Jo1.  | 
|   `REGEXP_MATCHES` or `SUBSTRING`   |  Return all captured substrings resulting from matching a POSIX regular expression against the string: <pre>REGEXP_MATCHES ('http://www.aws.com/products', '(http://[[:alnum:]]+.*/)')</pre> The result is `{http://www.aws.com/}`. You can use the following example <pre>SUBSTRING ('http://www.aws.com/products', '(http://[[:alnum:]]+.*/)')</pre> The result is `http://www.aws.com/`.  | 
|   `REPLACE`   |  Returns char with every occurrence of search string replaced with a replacement string: `replace ('abcdef', 'abc', '123')` → 123def.  | 
|   `LTRIM` or `RTRIM`   |  Remove the longest string containing only characters from characters (a space by default) from the start of string: `ltrim('zzzyaws', 'xyz')` → aws.  | 
|   `SUBSTRING`   |  Extract substring: `substring ( 'John Smith', 6 ,1)` → S.  | 
|   `TRIM`   |  Remove the longest string containing only characters from characters (a space by default) from the start, end, or both ends: `trim (both from 'yxJohnxx', 'xyz')` → John.  | 
|   `ASCII`   |  Returns the decimal representation in the database character set of the first character of char: `ascii('a')` → 97.  | 
|   `LENGTH`   |  Return the length of char: `length ('John S.')` → 7.  | 

To create the `PATINDEX` function, use the following code snippet. Note that a return value of 0 means the pattern wasn’t found; position numbering starts at 1.

```
CREATE OR REPLACE FUNCTION "patindex"( "pattern" VARCHAR, "expression" VARCHAR )
RETURNS INT AS $BODY$
SELECT COALESCE(STRPOS($2,(
  SELECT(REGEXP_MATCHES($2,'(' ||
  REPLACE( REPLACE(TRIM( $1, '%' ), '%', '.*?' ), '_', '.' )
    || ')','i') )[ 1 ] LIMIT 1)),0);
$BODY$ LANGUAGE 'sql' IMMUTABLE;

SELECT patindex( 'Lo%', 'Long String' );

patindex
1

SELECT patindex( '%rin%', 'Long String' );
patindex
8

SELECT patindex( '%g_S%', 'Long String' );
patindex
4
```
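
The PostgreSQL equivalents of `STRING_SPLIT` and `STRING_AGG` listed in the summary can be used like this:

```
-- Equivalent of STRING_SPLIT('1,2', ','): returns two rows, 1 and 2.
SELECT regexp_split_to_table('1,2', ',');

-- STRING_AGG works the same way as in SQL Server.
SELECT id, STRING_AGG(c, ',')
FROM (VALUES (1, 'a'), (1, 'b'), (2, 'c')) AS x(id, c)
GROUP BY id;
```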

## Summary



| SQL Server function |  Aurora PostgreSQL function | 
| --- | --- | 
|   `ASCII`   |   `ASCII`   | 
|   `UNICODE`   |  For UTF8 inputs, you can use only `ASCII`.  | 
|   `CHAR` and `NCHAR`   |   `CHR`   | 
|   `CHARINDEX`   |   `POSITION`   | 
|   `PATINDEX`   |  See examples  | 
|   `CONCAT` and `CONCAT_WS`   |   `CONCAT` and `CONCAT_WS`   | 
|   `LEFT`, `RIGHT`, and `SUBSTRING`   |   `LEFT`, `RIGHT`, and `SUBSTRING`   | 
|   `LOWER` and `UPPER`   |   `LOWER` and `UPPER`   | 
|   `LTRIM`, `RTRIM` and `TRIM`   |   `LTRIM`, `RTRIM` and `TRIM`   | 
|   `STR`   |   `TO_CHAR`   | 
|   `REVERSE`   |   `REVERSE`   | 
|   `REPLICATE`   |   `LPAD`   | 
|   `REPLACE`   |   `REPLACE`   | 
|   `STRING_SPLIT`   |   `regexp_split_to_array` or `regexp_split_to_table`   | 
|   `STRING_AGG`   |   `STRING_AGG`   | 

For more information, see [String Functions and Operators](https://www.postgresql.org/docs/13/functions-string.html) in the *PostgreSQL documentation*.

# Databases and schemas for T-SQL


This topic provides reference information comparing database and schema structures between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. You can gain insights into how these database management systems handle logical containers for security and access control. The topic explores the similarities and differences in how databases, schemas, and objects are organized and referenced in both systems.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Five star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-5.png)   |   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-5.png)   |  N/A  |  N/A  | 

## SQL Server Usage


Databases and schemas are logical containers for security and access control. Administrators can grant permissions collectively at both the databases and the schema levels. SQL Server instances provide security at three levels: individual objects, schemas (collections of objects), and databases (collections of schemas). For more information, see [Data Control Language](chap-sql-server-aurora-pg.security.datacontrollanguage.md).

**Note**  
In previous versions of SQL Server, the term user was interchangeable with the term schema. For backward compatibility, each database has several built-in security schemas including `guest`, `dbo`, `db_datareader`, `sys`, `INFORMATION_SCHEMA`, and others. Most likely, you don’t need to migrate these schemas.

Each SQL Server instance can host and manage a collection of databases, which consists of SQL Server processes and the Master, Model, TempDB, and MSDB system databases.

The most common SQL Server administrator tasks at the database level are:
+ Managing physical files: add, remove, change file growth settings, and re-size files.
+ Managing filegroups: partition schemes, object distribution, and read-only protection of tables.
+ Managing default options.
+ Creating database snapshots.

Unique object identifiers within an instance use three-part identifiers: <Database name>.<Schema name>.<Object name>.

The recommended way to view database object metadata, including schemas, is to use the ANSI standard information schema views. In most cases, these views are compatible with other ANSI-compliant Relational Database Management Systems (RDBMS).
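
For example, the following query uses the ANSI information schema views to list user tables; it runs unchanged on both SQL Server and PostgreSQL.

```
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';
```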

To view a list of all databases on the server, use the `sys.databases` catalog view.

### Syntax


Simplified syntax for `CREATE DATABASE`.

```
CREATE DATABASE <database name>
[ ON [ PRIMARY ] <file specifications> [, <filegroup> ] ]
[ LOG ON <file specifications> ]
[ WITH <options specification> ];
```

Simplified syntax for `CREATE SCHEMA`.

```
CREATE SCHEMA <schema name> [ AUTHORIZATION <owner name> ];
```

### Examples


The following example adds a file to a database and creates a table using the new file.

```
USE master;
```

```
ALTER DATABASE NewDB
ADD FILEGROUP NewGroup;
```

```
ALTER DATABASE NewDB
ADD FILE (
  NAME = 'NewFile',
  FILENAME = 'D:\NewFile.ndf',
  SIZE = 2MB
)
TO FILEGROUP NewGroup;
```

```
USE NewDB;
CREATE TABLE NewTable
(
  Col1 INT PRIMARY KEY
)
ON NewGroup;
```

```
SELECT Name
FROM sys.databases
WHERE database_id > 4;
```

The following example creates a table within a new schema and database.

```
USE master
CREATE DATABASE NewDB;

USE NewDB;
CREATE SCHEMA NewSchema;

CREATE TABLE NewSchema.NewTable
(
  NewColumn VARCHAR(20) NOT NULL PRIMARY KEY
);
```

This example uses default settings for the new database and schema.

For more information, see [sys.databases (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-databases-transact-sql?view=sql-server-ver15), [CREATE SCHEMA (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-schema-transact-sql?view=sql-server-ver15), and [CREATE DATABASE](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-database-transact-sql?view=sql-server-ver15&tabs=sqlpool) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) supports both the `CREATE SCHEMA` and `CREATE DATABASE` statements.

As with SQL Server, Aurora PostgreSQL does have the concept of an instance hosting multiple databases, which in turn contain multiple schemas. Objects in Aurora PostgreSQL are referenced as a three-part name: `<database>.<schema>.<object>`.

A schema is essentially a namespace that contains named objects.

When a database is created, it is cloned from a template database (`template1` by default).

### Syntax


Syntax for `CREATE DATABASE`.

```
CREATE DATABASE name
  [ [ WITH ] [ OWNER [=] user_name ]
    [ TEMPLATE [=] template ]
    [ ENCODING [=] encoding ]
    [ LC_COLLATE [=] lc_collate ]
    [ LC_CTYPE [=] lc_ctype ]
    [ TABLESPACE [=] tablespace_name ]
    [ ALLOW_CONNECTIONS [=] allowconn ]
    [ CONNECTION LIMIT [=] connlimit ]
    [ IS_TEMPLATE [=] istemplate ] ]
```

Syntax for `CREATE SCHEMA`.

```
CREATE SCHEMA schema_name [ AUTHORIZATION role_specification ] [ schema_element [ ... ] ]
CREATE SCHEMA AUTHORIZATION role_specification [ schema_element [ ... ] ]
CREATE SCHEMA IF NOT EXISTS schema_name [ AUTHORIZATION role_specification ]
CREATE SCHEMA IF NOT EXISTS AUTHORIZATION role_specification

where role_specification can be:
user_name | CURRENT_USER | SESSION_USER
```

### Migration Considerations


Unlike SQL Server, Aurora PostgreSQL doesn’t support the `USE` command to specify the default database for unqualified object names. To work with a different database, open a new connection to that database with the required permissions; cross-database references aren’t supported.
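
Within a single database, part of the `USE` behavior can be emulated at the schema level by setting the schema search path. A sketch, assuming a schema named `newschema` containing a table `newtable`:

```
SET search_path TO newschema, public;

-- Unqualified names now resolve against newschema first.
SELECT * FROM newtable;
```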

For applications that use a single database with multiple schemas, migration is more straightforward and requires fewer rewrites because two-part names are already in use.

Query the `pg_catalog.pg_database` catalog to view databases in Aurora PostgreSQL.

```
SELECT datname, datcollate, datistemplate, datallowconn
FROM pg_catalog.pg_database;

datname    datcollate   datistemplate  datallowconn
template0  en_US.UTF-8  true           false
rdsadmin   en_US.UTF-8  false          true
template1  en_US.UTF-8  true           true
postgres   en_US.UTF-8  false          true
```

### Examples


The following example creates a new database.

```
CREATE DATABASE NewDatabase;
```

The following example creates a schema owned by the user `joe`. Because no schema name is specified, the schema is also named `joe`.

```
CREATE SCHEMA AUTHORIZATION joe;
```

The following example creates a schema, a table, and a view.

```
CREATE SCHEMA world_flights
  CREATE TABLE flights (flight_id VARCHAR(10), departure DATE, airport VARCHAR(30))
  CREATE VIEW us_flights AS
    SELECT flight_id, departure FROM flights WHERE airport='United States';
```

For more information, see [CREATE DATABASE](https://www.postgresql.org/docs/13/sql-createdatabase.html) and [CREATE SCHEMA](https://www.postgresql.org/docs/13/sql-createschema.html) in the *PostgreSQL documentation*.

# Dynamic SQL for T-SQL


This topic provides reference information on migrating dynamic SQL functionality from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL. You can use this guide to understand how to adapt your dynamic SQL queries and commands when transitioning to PostgreSQL. The topic explains the differences in syntax and execution methods between the two database systems, offering practical examples for running SELECT queries, DML commands, and DDL statements dynamically in PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[Five star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-5.png)   |  N/A  |  Different paradigm and syntax require rewriting the application.  | 

## SQL Server Usage


Dynamic SQL is a feature that helps minimize hard-coded SQL. The SQL engine optimizes code, which leads to fewer hard parses.

Developers can use dynamic SQL to construct and run SQL queries at run time as a string, using some logic in SQL to construct varying query strings, without having to pre-construct them during development.

There are two options for running dynamic SQL: use the `EXECUTE` command or the `sp_executesql` system stored procedure.

### EXECUTE Command


Use this option to run a command string within a T-SQL block, procedure, or function. You can also use the `EXECUTE` command with linked servers. You can define metadata for the result set using the `WITH RESULT SETS` options.

For parameters, use either the value or `@parameter_name=value`.

**Note**  
Make sure that you validate the structure of the string command before running it with the `EXECUTE` command.

 **Syntax** 

The following example shows the SQL Server syntax that runs a stored procedure or function.

```
[ { EXEC | EXECUTE } ]
  {
    [ @return_status = ]
    { module_name [ ;number ] | @module_name_var }
      [ [ @parameter = ] { value
        | @variable [ OUTPUT ]
        | [ DEFAULT ]
        }
      ]
    [ ,...n ]
    [ WITH <execute_option> [ ,...n ] ]
  }
[;]
```

The following example shows the SQL Server syntax that runs a character string.

```
{ EXEC | EXECUTE }
  ( { @string_variable | [ N ]'tsql_string' } [ + ...n ] )
  [ AS { LOGIN | USER } = ' name ' ]
[;]
```

The following example shows the SQL Server syntax that runs a pass-through command against a linked server.

```
{ EXEC | EXECUTE }
  ( { @string_variable | [ N ] 'command_string [ ? ]' } [ + ...n ]
    [ { , { value | @variable [ OUTPUT ] } } [ ...n ] ]
  )
  [ AS { LOGIN | USER } = ' name ' ]
  [ AT linked_server_name ]
[;]

<execute_option>::=
{
  RECOMPILE
  | { RESULT SETS UNDEFINED }
  | { RESULT SETS NONE }
  | { RESULT SETS ( <result_sets_definition> [,...n ] ) }
  }

<result_sets_definition> ::=
{
  (
    { column_name
    data_type
    [ COLLATE collation_name ]
    [ NULL | NOT NULL ] }
    [,...n ]
    )
  | AS OBJECT
    [ db_name . [ schema_name ] . | schema_name . ]
    {table_name | view_name | table_valued_function_name }
  | AS TYPE [ schema_name.]table_type_name
  | AS FOR XML
}
```

 **Example** 

The following example shows how to use `EXECUTE` to run a T-SQL string built from variables.

```
DECLARE @scm_name sysname;
DECLARE @tbl_name sysname;
EXECUTE ('DROP TABLE ' + @scm_name + '.' + @tbl_name + ';');
```

The following example shows how to use `EXECUTE AS USER` to switch context to another user.

```
DECLARE @scm_name sysname;
DECLARE @tbl_name sysname;
EXECUTE ('DROP TABLE ' + @scm_name + '.' + @tbl_name + ';') AS USER = 'SchemasAdmin';
```

The following example shows how to use `EXECUTE` with a result set.

```
EXEC GetMaxSalByDeptID 23
WITH RESULT SETS
(
  ([Salary] int NOT NULL)
);
```

### sp\_executesql System Stored Procedure


This option runs a T-SQL command or block that you can run several times and build dynamically. You can also use this option with embedded parameters.

 **Syntax** 

The following example shows the `sp_executesql` syntax for SQL Server, Azure SQL Database, Azure SQL Data Warehouse, and Parallel Data Warehouse.

```
sp_executesql [ @stmt = ] statement
[
  { , [ @params = ] N'@parameter_name data_type [ OUT | OUTPUT ][ ,...n ]' }
    { , [ @param1 = ] 'value1' [ ,...n ] }
]
```

 **Example** 

The following example shows how to use `sp_executesql` to run a SELECT statement.

```
EXECUTE sp_executesql
  N'SELECT * FROM HR.Employees
  WHERE DeptID = @DID',
  N'@DID int',
  @DID = 23;
```

For more information, see [sp\_executesql (Transact-SQL)](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-executesql-transact-sql?view=sql-server-2017) and [EXECUTE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/execute-transact-sql?view=sql-server-2017) in the *SQL Server documentation*.

## PostgreSQL Usage


The PostgreSQL `EXECUTE` command prepares and runs commands dynamically. The `EXECUTE` command can also run DDL statements and retrieve data using SQL commands. Similar to SQL Server, you can use the PostgreSQL `EXECUTE` command with bind variables.

Converting SQL Server dynamic SQL to PostgreSQL requires significant effort.

 **Examples** 

The following example runs a SQL SELECT query with the table name as a dynamic variable using bind variables. This query returns the number of employees under a manager with a specific ID.

```
DO $$DECLARE
tabname varchar(30) := 'employees';
num integer := 1;
cnt integer;
BEGIN
EXECUTE format('SELECT count(*) FROM %I WHERE manager = $1', tabname)
INTO cnt USING num;
RAISE NOTICE 'Count is % in table %', cnt, tabname;
END$$;
```

The following example runs a DML command; first with no variables and then with variables.

```
DO $$DECLARE
BEGIN
EXECUTE 'INSERT INTO numbers (a) VALUES (1)';
EXECUTE format('INSERT INTO numbers (a) VALUES (%s)', 42);
END$$;
```

**Note**  
 `%s` formats the argument value as a simple string. A null value is treated as an empty string. `%I` treats the argument value as an SQL identifier and double-quotes it if necessary. It is an error for the value to be null.
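The `format` function also supports `%L`, which quotes the argument as an SQL literal and renders a null value as the unquoted string `NULL`. The following sketch, which assumes a hypothetical `numbers` table with a text column named `label`, combines `%I` and `%L` to build a dynamic statement safely.

```
DO $$DECLARE
tab text := 'numbers';   -- hypothetical table with a text column named label
val text := 'O''Brien';
BEGIN
-- %I double-quotes the identifier if needed; %L single-quotes and escapes the literal
EXECUTE format('INSERT INTO %I (label) VALUES (%L)', tab, val);
END$$;
```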

The following example runs a DDL command.

```
DO $$DECLARE
BEGIN
EXECUTE 'CREATE TABLE numbers (num integer)';
END$$;
```

For more information, see [String Functions and Operators](https://www.postgresql.org/docs/13/functions-string.html) in the *PostgreSQL documentation*.

### Prepare


Using a `PREPARE` statement can improve performance of reusable SQL statements.

The `PREPARE` command can receive a `SELECT`, `INSERT`, `UPDATE`, `DELETE`, or `VALUES` statement and parse it with a user-specified qualifying name so you can use the EXECUTE command later without the need to re-parse the SQL statement for each run.
+ When you use `PREPARE` to create a prepared statement, it remains valid for the duration of the current session.
+ If a DDL command is run on a database object referenced by the prepared SQL statement, the next `EXECUTE` command requires a hard parse of the SQL statement.

 **Example** 

Use the `PREPARE` and `EXECUTE` commands together. The SQL command is prepared with a user-specified qualifying name. You can run the SQL command several times without the need for re-parsing.

```
PREPARE numplan (int, text, bool) AS
INSERT INTO numbers VALUES($1, $2, $3);
EXECUTE numplan(100, 'New number 100', 't');
EXECUTE numplan(101, 'New number 101', 't');
EXECUTE numplan(102, 'New number 102', 'f');
EXECUTE numplan(103, 'New number 103', 't');
```
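Prepared statements live until the session ends or you remove them explicitly. As a sketch, you can inspect the current session's prepared statements through the standard `pg_prepared_statements` view and release them with `DEALLOCATE`.

```
-- List prepared statements in the current session
SELECT name, statement FROM pg_prepared_statements;

-- Release one prepared statement, or all of them
DEALLOCATE numplan;
DEALLOCATE ALL;
```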

## Summary



| Functionality | SQL Server dynamic SQL | PostgreSQL EXECUTE and PREPARE | 
| --- | --- | --- | 
|  Run SQL with results and bind variables  |  <pre>DECLARE @sal int;<br />EXECUTE getSalary @sal OUTPUT;</pre>  |  <pre>EXECUTE format('select salary<br />  from employees<br />  WHERE %I = $1', col_name)<br />INTO amount USING col_val;</pre>  | 
|  Run DML with variables and bind variables  |  <pre>DECLARE @amount int<br />DECLARE @col_val int<br />DECLARE @col_name varchar(70)<br />DECLARE @sqlCommand varchar(1000)<br />SET @sqlCommand = 'UPDATE employees SET salary=salary+'<br />  + @amount + ' WHERE ' + @col_name + '=' + @col_val<br />EXECUTE (@sqlCommand)</pre>  |  <pre>EXECUTE format('UPDATE employees SET salary = salary<br />  + $1 WHERE %I = $2', col_name) USING amount, col_val;</pre>  | 
|  Run DDL  |  <pre>EXECUTE ('CREATE TABLE link_emp (idemp1 integer, idemp2 integer);');</pre>  |  <pre>EXECUTE 'CREATE TABLE link_emp (idemp1 integer, idemp2 integer)';</pre>  | 
|  Run anonymous block  |  <pre>BEGIN ... END;</pre>  |  <pre>DO $$ BEGIN ... END $$;</pre>  | 

For more information, see [Basic Statements](https://www.postgresql.org/docs/13/plpgsql-statements.html) in the *PostgreSQL documentation*.

# Transactions for T-SQL


This topic provides reference information about transaction handling in Microsoft SQL Server and Amazon Aurora PostgreSQL, focusing on their similarities and differences. It explores the fundamental principles of database transactions, including ACID properties and isolation levels, and how they are implemented in both database systems. The topic compares the default behaviors, syntax variations, and supported features for managing transactions in SQL Server and PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |   [Transaction Isolation](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.transactionisolation)   |  Nested transactions aren’t supported, and there are syntax differences for initializing a transaction.  | 

## SQL Server Usage


A transaction is a unit of work performed on a database and typically represents a change in the database. Transactions serve the following purposes:
+ Provide units of work that enable recovery from logical or physical system failures while keeping the database in a consistent state.
+ Provide isolation between users and programs accessing a database concurrently.

Transactions are an all-or-nothing unit of work. Each transactional unit of work must either complete or roll back all of its data changes. Also, transactions must be isolated from other transactions. The view of the data for each transaction must conform to the defined database isolation level.

Database transactions must comply with ACID properties.
+  **Atomic** — Transactions are all or nothing. If any part of the transaction fails, the entire transaction fails and the database remains unchanged.

  There are exceptions to this rule. For example, under the ANSI definitions, some constraint violations shouldn’t cause a transaction rollback.
+  **Consistent** — All transactions must bring the database from one valid state to another valid state. Data must be valid according to all defined rules, constraints, triggers, and so on.
+  **Isolation** — Concurrent run of transactions must result in a system state that would occur if transactions were run sequentially.

  There are several exceptions to this rule based on the lenience of the required isolation level.
+  **Durable** — After a transaction commits successfully and is acknowledged to the client, the engine must guarantee that its changes are persisted in the event of power loss, system crashes, or any other errors.

By default, SQL Server runs in autocommit mode: the `IMPLICIT_TRANSACTIONS` setting is `OFF`, and every statement is treated as a transaction of its own unless a transaction is explicitly defined. This behavior differs from other engines, such as Oracle, where by default every DML statement requires an explicit `COMMIT` to be persisted.

### Syntax


The following examples show the simplified syntax for the commands defining transaction boundaries.

Define the beginning of a transaction.

```
BEGIN TRAN | TRANSACTION [<transaction name>]
```

Commit work at the end of a transaction.

```
COMMIT WORK | [ TRAN | TRANSACTION [<transaction name>]]
```

Rollback work at the end of a transaction.

```
ROLLBACK WORK | [ TRAN | TRANSACTION [<transaction name>]]
```

SQL Server supports the standard ANSI isolation levels defined by the ANSI/ISO SQL standard (SQL92).

Each level provides a different approach for managing the concurrent run of transactions. The main purpose of a transaction isolation level is to manage the visibility of changed data as seen by other running transactions. Additionally, when concurrent transactions access the same data, the level of transaction isolation affects the way they interact with each other.
+  **Read uncommitted** — A current transaction can see uncommitted data from other transactions. If a transaction performs a rollback, all data is restored to its previous state.
+  **Read committed** — A transaction sees only data changes that were committed. Therefore, dirty reads aren’t possible. However, after another transaction commits, its changes become visible to the current transaction even while it’s still running.
+  **Repeatable read** — A transaction sees data changes made by the other transactions only after both transactions issue a commit or are rolled back.
+  **Serializable** — This isolation level is the strictest because it doesn’t permit transaction overwrites of another transaction’s actions. Concurrent run of a set of serializable transactions is guaranteed to produce the same effect as running them sequentially in the same order.

The main difference between isolation levels is the phenomena they prevent from appearing. The three preventable phenomena are:
+  **Dirty reads** — A transaction can read data written by another transaction but not yet committed.
+  **Non-repeatable or fuzzy reads** — When reading the same data several times, a transaction can find that the data has been modified by another transaction that has just committed. The same query run twice can return different values for the same rows.
+  **Phantom or ghost reads** — Similar to a non-repeatable read, but related to new data created by another transaction. The same query run twice can return different numbers of records.

The following table summarizes the four ANSI/ISO SQL standard (SQL92) isolation levels and indicates which phenomena are allowed or disallowed.


| Transaction isolation level | Dirty reads | Non-repeatable reads | Phantom reads | 
| --- | --- | --- | --- | 
|  Read uncommitted  |  Allowed  |  Allowed  |  Allowed  | 
|  Read committed  |  Disallowed  |  Allowed  |  Allowed  | 
|  Repeatable read  |  Disallowed  |  Disallowed  |  Allowed  | 
|  Serializable  |  Disallowed  |  Disallowed  |  Disallowed  | 

There are two common implementations for transaction isolation:
+  **Pessimistic isolation or locking** — Resources accessed by a transaction are locked for the duration of the transaction. Depending on the operation, resource, and transaction isolation level, other transactions can see changes made by the locking transaction, or they must wait for it to complete. With this mechanism, there is only one copy of the data for all transactions, which minimizes memory and disk resource consumption at the expense of transaction lock waits.
+  **Optimistic isolation (MVCC)** — Every transaction owns a set of the versions of the resources (typically rows) that it accessed. In this mode, transactions don’t have to wait for one another at the expense of increased memory and disk utilization. In this isolation mechanism, there is a chance that conflicts will arise when transactions attempt to commit. In case of a conflict, the application needs to be able to handle the rollback, and attempt a retry.

SQL Server implements both mechanisms. You can use them concurrently.

For optimistic isolation, SQL Server introduced two additional isolation levels: read-committed snapshot and snapshot.

Set the transaction isolation level using the `SET` command. It affects the current run scope only.

```
SET TRANSACTION ISOLATION LEVEL { READ UNCOMMITTED | READ COMMITTED | REPEATABLE READ | SNAPSHOT | SERIALIZABLE }
```

### Examples


The following example runs two DML statements within a serializable transaction.

```
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
INSERT INTO Table1
VALUES (1, 'A');
UPDATE Table2
  SET Column1 = 'Done'
WHERE KeyColumn = 1;
COMMIT TRANSACTION;
```

For more information, see [Transaction Isolation Levels (ODBC)](https://docs.microsoft.com/en-us/sql/odbc/reference/develop-app/transaction-isolation-levels?view=sql-server-ver15) and [SET TRANSACTION ISOLATION LEVEL (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


The same ANSI/ISO SQL (SQL92) isolation levels apply to PostgreSQL as to SQL Server, but with some differences in implementation.


| Transaction isolation level | Dirty reads | Non-repeatable reads | Phantom reads | 
| --- | --- | --- | --- | 
|  Read uncommitted  |  Permitted but not implemented  |  Permitted  |  Permitted  | 
|  Read committed  |  Not permitted  |  Permitted  |  Permitted  | 
|  Repeatable read  |  Not permitted  |  Not permitted  |  Permitted but not implemented  | 
|  Serializable  |  Not permitted  |  Not permitted  |  Not permitted  | 

PostgreSQL technically accepts all four transaction isolation levels, but only three distinct behaviors are implemented. The read uncommitted isolation level behaves as read committed.

The repeatable read implementation doesn’t allow phantom reads, which makes it similar to the serializable isolation level. The primary difference between repeatable read and serializable is that serializable guarantees that the result of concurrent transactions is precisely the same as if they were run serially, which isn’t always true for repeatable read.

Starting with PostgreSQL 12, you can add the `AND CHAIN` option to the `COMMIT` or `ROLLBACK` commands to immediately start another transaction with the same parameters as the preceding transaction.
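For example, the following sketch, which assumes PostgreSQL 12 or later and the `employees` table used elsewhere in this topic, commits a batch and immediately opens a new transaction with the same isolation level.

```
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
UPDATE employees SET salary = salary * 1.05 WHERE employee_id = 100;
-- Commit and start a new transaction with the same characteristics
COMMIT AND CHAIN;
UPDATE employees SET salary = salary * 1.05 WHERE employee_id = 101;
COMMIT;
```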

### Multiversion Concurrency Control


In PostgreSQL, the multiversion concurrency control (MVCC) mechanism allows transactions to work with a consistent snapshot of data ignoring changes made by other transactions that have not yet committed or rolled back. Each transaction sees a snapshot of accessed data accurate to its run start time regardless of what other transactions are doing concurrently.

### Isolation Levels


PostgreSQL supports the read committed, repeatable read, and serializable isolation levels. Read committed is the default isolation level.
+  **Read-committed** — The default PostgreSQL transaction isolation level. It prevents sessions from seeing data from concurrent transactions until it is committed. Dirty reads aren’t permitted.
+  **Repeatable read** — Queries can only see rows committed before the first query or DML statement was run in the transaction.
+  **Serializable** — Provides the strictest transaction isolation level. The Serializable isolation level assures that the result of the concurrent transactions will be the same as if they run serially. This isn’t always the case for the repeatable read isolation level.

### Setting Isolation Levels in Aurora PostgreSQL


You can configure isolation levels at several levels.
+ Session level.
+ Transaction level.
+ Instance level using Aurora parameter groups.

### Syntax


```
SET TRANSACTION transaction_mode [...]
SET TRANSACTION SNAPSHOT snapshot_id
SET SESSION CHARACTERISTICS AS TRANSACTION transaction_mode [...]

where transaction_mode is one of:

ISOLATION LEVEL {
  SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED
}
READ WRITE | READ ONLY [ NOT ] DEFERRABLE
```

### Examples


The following example configures the isolation level for a specific transaction.

```
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```

The following example configures the isolation level for a specific session.

```
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```

Use the following example to view the current isolation level.

```
SELECT CURRENT_SETTING('TRANSACTION_ISOLATION'); -- Session
SHOW DEFAULT_TRANSACTION_ISOLATION; -- Instance
```

You can use parameter groups to modify instance-level parameters for Aurora PostgreSQL. For example, you can alter the `default_transaction_isolation` parameter using the AWS Console or the AWS CLI. For more information, see [Working with parameter groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html#USER_WorkingWithParamGroups.Modifying).

### Comparison table of relevant database features related to transactions



| Database feature | SQL Server | PostgreSQL | 
| --- | --- | --- | 
|  AutoCommit  |  On (`IMPLICIT_TRANSACTIONS` is `OFF` by default)  |  On. Each statement commits automatically unless it runs inside an explicit transaction block. Client tools such as psql also default to autocommit on; run `\echo :AUTOCOMMIT` in psql to check the current setting.  | 
|  MVCC  |  Yes  |  Yes  | 
|  Default isolation level  |  Read-committed  |  Read-committed  | 
|  Supported isolation levels  |  READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SNAPSHOT, SERIALIZABLE  |  Read committed, repeatable read, serializable  | 
|  Configure session isolation levels  |  Yes  |  Yes  | 
|  Configure transaction isolation levels  |  Yes  |  Yes  | 

### Read-Committed Isolation Level



| TX1 | TX2 | Comment | 
| --- | --- | --- | 
|  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          24000.00</pre>  |  <pre>select employee_id, salary from<br />EMPLOYEES<br />where employee_id=100;<br /><br />employee_id  salary<br />100          24000.00</pre>  |  Same results returned from both sessions.  | 
|  <pre>begin;<br />UPDATE employees<br />SET salary=27000<br />WHERE employee_id=100;</pre>  |  <pre>begin;<br />set transaction isolation level<br />read committed;</pre>  |  TX1 starts a transaction and performs an update. TX2 starts a transaction with read-committed isolation level.  | 
|  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          27000.00</pre>  |  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          24000.00</pre>  |  TX1 will see the modified results while TX2 sees the original data.  | 
|  |  <pre>UPDATE employees<br />SET salary=29000<br />WHERE employee_id=100;</pre>  |  Waits because TX2 is blocked by TX1.  | 
|  <pre>Commit;</pre>  |  |  TX1 issues a commit, and the lock is released.  | 
|  |  <pre>Commit;</pre>  |  TX2 issues a commit.  | 
|  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          29000.00</pre>  |  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          29000.00</pre>  |  Both queries return the updated value.  | 

### Serializable Isolation Level



| TX1 | TX2 | Comment | 
| --- | --- | --- | 
|  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          24000.00</pre>  |  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          24000.00</pre>  |  Same results returned from both sessions.  | 
|  <pre>begin;<br />UPDATE employees<br />SET salary=27000<br />WHERE employee_id=100;</pre>  |  <pre>begin;<br />set transaction isolation level serializable;</pre>  |  TX1 starts a transaction and performs an update. TX2 starts a transaction with isolation level of serializable.  | 
|  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          27000.00</pre>  |  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          24000.00</pre>  |  TX1 will see the modified results while TX2 sees the original data.  | 
|  |  <pre>update employees<br />set salary=29000<br />where employee_id=100;</pre>  |  Waits because TX2 is blocked by TX1.  | 
|  <pre>Commit;</pre>  |  |  TX1 issues a commit, and the lock is released.  | 
|  |  ERROR: could not serialize access due to concurrent update  |  TX2 receives an error message.  | 
|  |  <pre>Commit;<br />ROLLBACK</pre>  |  TX2 tries to issue a commit but receives a rollback message. The transaction failed because of the serializable isolation level.  | 
|  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          27000.00</pre>  |  <pre>SELECT employee_id, salary<br />FROM EMPLOYEES<br />WHERE employee_id=100;<br /><br />employee_id  salary<br />100          27000.00</pre>  |  Both queries return the value updated according to TX1.  | 

## Summary



| Transaction property | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  Default isolation level  |   `READ COMMITTED`   |   `READ COMMITTED`   | 
|  Initialize transaction syntax  |   `BEGIN TRAN` or `BEGIN TRANSACTION`   |   `BEGIN` or `START TRANSACTION`   | 
|  Default isolation mechanism  |  Pessimistic lock based  |  Lock based for writes, consistent read for selects  | 
|  Commit transaction  |  <pre>COMMIT<br />[WORK|TRAN|TRANSACTION]</pre>  |  <pre>COMMIT<br />[ WORK | TRANSACTION ]</pre>  | 
|  Rollback transaction  |  <pre>ROLLBACK [WORK |[ TRAN | TRANSACTION]</pre>  |  <pre>ROLLBACK [ WORK | TRANSACTION ]</pre>  | 
|  Set autocommit off/on  |  <pre>SET IMPLICIT_TRANSACTIONS OFF | ON</pre>  |  <pre>\set AUTOCOMMIT off | on (psql)</pre>  | 
|  ANSI isolation  |  <pre>REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED | SERIALIZABLE</pre>  |  <pre>REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED | SERIALIZABLE</pre>  | 
|  MVCC  |  <pre>SNAPSHOT and READ<br />COMMITTED SNAPSHOT</pre>  |  <pre>READ COMMITTED SNAPSHOT</pre>  | 
|  Nested transactions  |  Supported; check the nesting level with `@@TRANCOUNT`   |  Not supported  | 

For more information, see [Transactions](https://www.postgresql.org/docs/13/tutorial-transactions.html), [Transaction Isolation](https://www.postgresql.org/docs/13/transaction-iso.html), and [SET TRANSACTION](https://www.postgresql.org/docs/13/sql-set-transaction.html) in the *PostgreSQL documentation*.

# Synonyms for T-SQL


This topic provides reference information about the differences in synonym functionality between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. You can understand how SQL Server uses synonyms as alternative identifiers for database objects and how this feature is not directly supported in PostgreSQL. The topic explains the purpose and benefits of synonyms in SQL Server, such as providing an abstraction layer and simplifying the use of four-part identifiers for remote instances.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |  N/A  |  PostgreSQL doesn’t support synonyms. There is an available workaround.  | 

## SQL Server Usage


Synonyms are database objects that serve as alternative identifiers for other database objects. The referenced database object is called the base object and may reside in the same database, another database on the same instance, or a remote server.

Synonyms provide an abstraction layer to isolate client application code from changes to the name or location of the base object.

In SQL Server, synonyms are often used to simplify the use of four-part identifiers when accessing remote instances.

For example, suppose table A resides on server A and the client application accesses it directly. For scale-out reasons, table A needs to move to server B to offload resource consumption on server A. Without synonyms, the client application code must be rewritten to access server B. Instead, you can create a synonym called table A that transparently redirects the calling application to server B without any code changes.

You can create synonyms for the following objects:
+ Assembly (CLR) stored procedures, table-valued functions, scalar functions, and aggregate functions.
+ Replication-filter-procedures.
+ Extended stored procedures.
+ SQL scalar functions, table-valued functions, inline table-valued functions, views, and stored procedures.
+ User-defined tables including local and global temporary tables.

### Syntax


```
CREATE SYNONYM [ <Synonym Schema> ] . <Synonym Name>
FOR [ <Server Name> ] . [ <Database Name> ] . [ Schema Name> ] . <Object Name>
```

### Examples


The following example creates a synonym for a local object in a separate database.

```
CREATE TABLE DB1.Schema1.MyTable
(
KeyColumn INT IDENTITY PRIMARY KEY,
DataColumn VARCHAR(20) NOT NULL
);

USE DB2;
CREATE SYNONYM Schema2.MyTable
FOR DB1.Schema1.MyTable
```

The following example creates a synonym for a remote object.

```
-- On ServerA
CREATE TABLE DB1.Schema1.MyTable
(
KeyColumn INT IDENTITY PRIMARY KEY,
DataColumn VARCHAR(20) NOT NULL
);

-- On Server B
USE DB2;
CREATE SYNONYM Schema2.MyTable
FOR ServerA.DB1.Schema1.MyTable;
```

The preceding example assumes that a linked server named ServerA exists on Server B and points to Server A.

For more information, see [CREATE SYNONYM (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-synonym-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


SQL Server synonyms are often used to provide an alternative name for an object. PostgreSQL doesn’t provide a feature comparable to SQL Server synonyms. However, you can achieve similar functionality by using a few PostgreSQL objects.

 AWS SCT converts different source databases into one target database. Each source database becomes a schema in the new target database. AWS SCT adds the name of the source schemas as a prefix to the name of the target database schema. If you migrate several databases as part of one migration project, then you can avoid using synonyms because all converted objects are in the same database.

This lack of functionality in PostgreSQL adds a manual dimension to the migration process of SQL Server synonyms. Make sure that your database user has privileges on the base object and the relevant PostgreSQL options.

### Examples


To create a synonym of a table in PostgreSQL, use views.

The first step is to create a table that will be used as the base object and, on top of it, a view that will be used as a synonym.

```
CREATE TABLE target_db_name.DB1_Schema1.MyTable
(
  KeyColumn NUMERIC PRIMARY KEY,
  DataColumn VARCHAR(20) NOT NULL
);

CREATE VIEW target_db_name.DB2_Schema2.MyTable_Syn
AS SELECT * FROM target_db_name.DB1_Schema1.MyTable
```
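Because simple single-table views in PostgreSQL are automatically updatable, the view created in the preceding example also accepts DML, which brings the workaround closer to SQL Server synonym behavior. A minimal sketch, reusing the objects from the preceding example:

```
-- DML against the view is applied to the underlying base table
INSERT INTO target_db_name.DB2_Schema2.MyTable_Syn
VALUES (1, 'First row');

UPDATE target_db_name.DB2_Schema2.MyTable_Syn
SET DataColumn = 'Updated'
WHERE KeyColumn = 1;
```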

For more information, see [Views](chap-sql-server-aurora-pg.sql.views.md).

To create a synonym of a user-defined type in PostgreSQL, another user-defined type should be used to wrap the source type.

The first step is to create the user-defined type that will be used as the base object, and on top of it, a user-defined type that will be used as the synonym.

```
CREATE TYPE DB1.Schema1.MyType AS (
  ID NUMERIC,
  name CHARACTER VARYING(100));

CREATE TYPE DB2.Schema2.MyType_Syn AS (
  udt DB1.Schema1.MyType);
```

For more information, see [User-Defined Types](chap-sql-server-aurora-pg.tsql.udt.md).

To create a synonym for a function in PostgreSQL, another function should be used to wrap the source function.

As before, the first step is to create the function that serves as the base object. Then, on top of it, create a function that serves as the synonym.

```
CREATE OR REPLACE FUNCTION DB1.Schema1.MyFunc (P_NUM NUMERIC)
RETURNS numeric AS $$
begin
  RETURN P_NUM * 2;
END; $$
LANGUAGE PLPGSQL;

CREATE OR REPLACE FUNCTION DB2.Schema2.MyFunc_Syn (P_NUM NUMERIC)
RETURNS numeric AS $$
begin
  RETURN DB1.Schema1.MyFunc(P_NUM);
END; $$
LANGUAGE PLPGSQL;
```

For more information, see [User-Defined Functions](chap-sql-server-aurora-pg.tsql.udf.md).
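
Calling the wrapper behaves like calling the base function, as this short sketch using the functions created above shows.

```
SELECT DB1.Schema1.MyFunc(21);      -- base object, returns 42
SELECT DB2.Schema2.MyFunc_Syn(21);  -- "synonym" wrapper, also returns 42
```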

# Delete and update from for T-SQL


This topic provides reference information about feature compatibility between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL, specifically focusing on DELETE and UPDATE statements with JOINs. You can understand the differences in syntax and functionality when migrating from SQL Server to Aurora PostgreSQL. The topic highlights that while SQL Server supports an extended syntax for DELETE and UPDATE statements with additional FROM clauses, Aurora PostgreSQL has limitations in this area.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |  N/A  |  PostgreSQL doesn’t support `DELETE …​ FROM from_list`. Rewrite to use subqueries.  | 

## SQL Server Usage


SQL Server supports an extension to the ANSI standard that allows using an additional `FROM` clause in `UPDATE` and `DELETE` statements.

You can use this additional `FROM` clause to limit the number of modified rows by joining the table being updated, or deleted from, to one or more other tables. This functionality is similar to using a WHERE clause with a derived table sub-query. For `UPDATE`, you can use this syntax to set multiple column values simultaneously without repeating the sub-query for every column.

However, these statements can introduce logical inconsistencies if a row in an updated table is matched to more than one row in a joined table. The current implementation chooses an arbitrary value from the set of potential values and is non-deterministic.
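
The following sketch illustrates the ambiguity. The table and column names are illustrative only. Because `ItemDiscounts` holds two rows for `OrderID` 1, the `UPDATE` picks one of the two `Discount` values arbitrarily.

```
CREATE TABLE OrdersDemo (OrderID INT PRIMARY KEY, Discount MONEY NULL);
CREATE TABLE ItemDiscounts (OrderID INT, Discount MONEY);

INSERT INTO OrdersDemo (OrderID) VALUES (1);
INSERT INTO ItemDiscounts (OrderID, Discount)
VALUES (1, 10.00), (1, 20.00);  -- two matches for OrderID 1

UPDATE O
SET Discount = D.Discount  -- 10.00 or 20.00; the choice is arbitrary
FROM OrdersDemo AS O
  INNER JOIN ItemDiscounts AS D
  ON O.OrderID = D.OrderID;
```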

### Syntax


```
UPDATE <Table Name>
SET <Column Name> = <Expression> ,...
FROM <Table Source>
WHERE <Filter Predicate>;
```

```
DELETE FROM <Table Name>
FROM <Table Source>
WHERE <Filter Predicate>;
```

### Examples


The following example deletes customers with no orders.

```
CREATE TABLE Customers
(
  Customer VARCHAR(20) PRIMARY KEY
);
```

```
INSERT INTO Customers VALUES
('John'),
('Jim'),
('Jack');
```

```
CREATE TABLE Orders
(
  OrderID INT NOT NULL PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  OrderDate DATE NOT NULL
);
```

```
INSERT INTO Orders (OrderID, Customer, OrderDate) VALUES
(1, 'Jim', '20180401'),
(2, 'Jack', '20180402');
```

```
DELETE FROM Customers
FROM Customers AS C
  LEFT OUTER JOIN
  Orders AS O
  ON O.Customer = C.Customer
WHERE O.OrderID IS NULL;
```

```
SELECT *
FROM Customers;
```

```
Customer
Jim
Jack
```

The following example updates multiple columns in Orders based on the values in OrderCorrections.

```
CREATE TABLE OrderCorrections
(
  OrderID INT NOT NULL PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  OrderDate DATE NOT NULL
);
```

```
INSERT INTO OrderCorrections
VALUES (1, 'Jack', '20180324');
```

```
UPDATE O
SET Customer = OC.Customer,
  OrderDate = OC.OrderDate
FROM Orders AS O
  INNER JOIN
  OrderCorrections AS OC
  ON O.OrderID = OC.OrderID;
```

```
SELECT *
FROM Orders;
```

```
Customer  OrderDate
Jack      2018-03-24
Jack      2018-04-02
```

For more information, see [UPDATE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/queries/update-transact-sql?view=sql-server-ver15), [DELETE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/delete-transact-sql?view=sql-server-ver15), and [FROM clause plus JOIN, APPLY, PIVOT (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/queries/from-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


 Aurora PostgreSQL doesn’t support the `DELETE … FROM` syntax, but it supports the `UPDATE … FROM` syntax.

### Syntax


```
[ WITH [ RECURSIVE ] with_query [, ...] ]
UPDATE [ ONLY ] table_name [ * ] [ [ AS ] alias ]
  SET { column_name = { expression | DEFAULT } |
    ( column_name [, ...] ) = ( { expression | DEFAULT } [, ...] ) |
    ( column_name [, ...] ) = ( sub-SELECT )
  } [, ...]
  [ FROM from_list ]
  [ WHERE condition | WHERE CURRENT OF cursor_name ]
  [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
```

### Migration Considerations


You can rewrite the `DELETE` statements as subqueries. Place the subqueries in the WHERE clause. This workaround is simple and, in most cases, easier to read and understand.

### Examples


The following example deletes customers with no orders.

```
CREATE TABLE Customers
(
  Customer VARCHAR(20) PRIMARY KEY
);

INSERT INTO Customers
VALUES
('John'),
('Jim'),
('Jack');

CREATE TABLE Orders
(
  OrderID INT NOT NULL PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  OrderDate DATE NOT NULL
);

INSERT INTO Orders (OrderID, Customer, OrderDate)
VALUES
(1, 'Jim', '20180401'),
(2, 'Jack', '20180402');

DELETE FROM Customers
WHERE Customer NOT IN (
  SELECT Customer
  FROM Orders
);

SELECT * FROM Customers;

Customer
Jim
Jack
```
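
Note that `NOT IN` behaves unexpectedly when the subquery can return `NULL` values; every comparison with `NULL` evaluates to unknown, so no rows are deleted. Because `Orders.Customer` is declared `NOT NULL` here, the rewrite is safe, but `NOT EXISTS` is a NULL-safe alternative worth considering.

```
-- NULL-safe anti-join rewrite of the same DELETE
DELETE FROM Customers AS C
WHERE NOT EXISTS (
  SELECT 1
  FROM Orders AS O
  WHERE O.Customer = C.Customer
);
```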

The following example updates multiple columns in Orders based on the values in OrderCorrections.

```
CREATE TABLE OrderCorrections
(
  OrderID INT NOT NULL PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  OrderDate DATE NOT NULL
);

INSERT INTO OrderCorrections
VALUES (1, 'Jack', '20180324');

UPDATE Orders AS O
SET Customer = OC.Customer,
  OrderDate = OC.OrderDate
FROM OrderCorrections AS OC
WHERE O.OrderID = OC.OrderID;

SELECT *
FROM Orders;

Customer  OrderDate
Jack      2018-03-24
Jack      2018-04-02
```
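
For inner-join style deletes, which remove rows that do have a match, Aurora PostgreSQL also offers the `DELETE … USING` clause, which is close to the SQL Server `DELETE … FROM` pattern. For example, to delete the customers that do appear in Orders:

```
-- DELETE ... USING joins the target table to another table
DELETE FROM Customers AS C
USING Orders AS O
WHERE O.Customer = C.Customer;
```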

## Summary


The following table identifies similarities, differences, and key migration considerations.


| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  Join as part of `DELETE`   |   `DELETE FROM …​ FROM`   |  Not available. Rewrite to use WHERE clause with a sub-query.  | 
|  Join as part of `UPDATE`   |   `UPDATE …​ FROM`   |   `UPDATE …​ FROM`   | 

For more information, see [DELETE](https://www.postgresql.org/docs/13/sql-delete.html) and [UPDATE](https://www.postgresql.org/docs/13/sql-update.html) in the *PostgreSQL documentation*.

# Stored procedures for T-SQL


This topic provides reference information about the compatibility and differences between stored procedures in Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. You can use this guide to understand the key distinctions in syntax, security contexts, parameter handling, and supported features when migrating stored procedures from SQL Server to Aurora PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |   [Stored Procedures](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.storedprocedures)   |  Syntax and option differences.  | 

## SQL Server Usage


Stored procedures are encapsulated, persisted code modules that you can run using the `EXECUTE` T-SQL statement. They may have multiple input (`IN`) and output (`OUT`) parameters. Table-valued user-defined types can be used as input parameters. `IN` is the default direction for parameters, but `OUT` must be explicitly specified. You can specify parameters as both `IN` and `OUT`.

SQL Server allows you to run stored procedures in any security context using the `EXECUTE AS` option. You can explicitly recompile them for every run using the `RECOMPILE` option. You can encrypt them in the database using the `ENCRYPTION` option to prevent unauthorized access to the source code.

SQL Server provides a unique feature that allows you to use a stored procedure as the source of an INSERT statement. The rows in the result set returned by the stored procedure are inserted into the target table.

### Syntax


```
CREATE [ OR ALTER ] { PROC | PROCEDURE } <Procedure Name>
[<Parameter List>
[ WITH [ ENCRYPTION ]|[ RECOMPILE ]|[ EXECUTE AS ...]]
AS {
[ BEGIN ]
<SQL Code Body>
[ END ] }[;]
```

### Examples


 **Create and run a stored procedure** 

The following example creates a simple parameterized stored procedure to validate the basic format of an email.

```
CREATE PROCEDURE ValidateEmail
@Email VARCHAR(128), @IsValid BIT = 0 OUT
AS
BEGIN
IF @Email LIKE N'%@%' SET @IsValid = 1
ELSE SET @IsValid = 0
RETURN @IsValid
END;
```

The following example runs this stored procedure.

```
DECLARE @IsValid BIT
EXECUTE [ValidateEmail]
@Email = 'X@y.com', @IsValid = @IsValid OUT;
SELECT @IsValid;

-- Returns 1
```

```
EXECUTE [ValidateEmail]
@Email = 'Xy.com', @IsValid = @IsValid OUT;
SELECT @IsValid;

-- Returns 0
```

The following example creates a stored procedure that uses `RETURN` to pass an error value to the application.

```
CREATE PROCEDURE ProcessImportBatch
@BatchID INT
AS
BEGIN
BEGIN TRY
EXECUTE Step1 @BatchID
EXECUTE Step2 @BatchID
EXECUTE Step3 @BatchID
END TRY
BEGIN CATCH
IF ERROR_NUMBER() = 235
RETURN -1 -- indicate special condition
ELSE
THROW -- handle error normally
END CATCH
END
```
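
A caller typically captures the value passed back by `RETURN` into a local variable, as in the following sketch. The batch ID value is illustrative only.

```
DECLARE @ReturnCode INT;

EXECUTE @ReturnCode = ProcessImportBatch @BatchID = 42;

IF @ReturnCode = -1
  PRINT 'Special condition encountered for this batch';
```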

 **Using a table-valued input parameter** 

The following example creates and populates an OrderItems table.

```
CREATE TABLE OrderItems(
OrderID INT NOT NULL,
Item VARCHAR(20) NOT NULL,
Quantity SMALLINT NOT NULL,
PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200),
(3, 'M6 Washer', 100);
```

The following example creates a table-valued type for the `OrderItem` table-valued parameter.

```
CREATE TYPE OrderItems
AS TABLE
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

The following example creates a procedure to process order items.

```
CREATE PROCEDURE InsertOrderItems
@OrderItems AS OrderItems READONLY
AS
BEGIN
  INSERT INTO OrderItems(OrderID, Item, Quantity)
  SELECT OrderID,
    Item,
    Quantity
  FROM @OrderItems
END;
```

The following example populates the table-valued variable and passes the data set to the stored procedure.

```
DECLARE @OrderItems AS OrderItems;

INSERT INTO @OrderItems ([OrderID], [Item], [Quantity])
VALUES
(1, 'M8 Bolt', 100),
(1, 'M8 Nut', 100),
(1, 'M8 Washer', 200);

EXECUTE [InsertOrderItems]
@OrderItems = @OrderItems;

(3 rows affected)
   Item       Quantity
1  M8 Bolt    100
2  M8 Nut     100
3  M8 Washer  200
```

### INSERT…​ EXEC Syntax


```
INSERT INTO <MyTable>
EXECUTE <MyStoredProcedure>;
```
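
As a sketch of the pattern, the following example inserts the result set of a procedure into a table. The procedure and table names are illustrative, and the example assumes the Orders table used earlier in this topic.

```
CREATE TABLE RecentOrders (OrderID INT, Customer VARCHAR(20));

CREATE PROCEDURE GetRecentOrders
AS
BEGIN
  SELECT OrderID, Customer
  FROM Orders
  WHERE OrderDate >= DATEADD(DAY, -7, GETDATE());
END;

INSERT INTO RecentOrders (OrderID, Customer)
EXECUTE GetRecentOrders;
```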

For more information, see [CREATE PROCEDURE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-procedure-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL version 10 supports stored functions only, which you create with the `CREATE FUNCTION` statement. The `CREATE PROCEDURE` statement isn’t supported in version 10; PostgreSQL version 11 and later add native stored procedures that you run with `CALL`.
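
If you target PostgreSQL version 11 or later, a native procedure looks like the following sketch. The procedure name is illustrative, and the example assumes a Customers table with a single Customer column.

```
CREATE OR REPLACE PROCEDURE insert_customer(p_name VARCHAR)
LANGUAGE plpgsql
AS $$
BEGIN
  INSERT INTO Customers (Customer) VALUES (p_name);
  COMMIT;  -- procedures can run transaction control statements; functions can't
END; $$;

CALL insert_customer('John');
```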

PL/pgSQL is the main database programming language used when migrating SQL Server T-SQL code. PostgreSQL supports the following procedural languages, which are also available in Amazon Aurora PostgreSQL:
+ PL/pgSQL
+ PL/Tcl
+ PL/Perl

Run the `SHOW rds.extensions` command to view all available Amazon Aurora extensions.

### PostgreSQL Create Function Privileges


To create a function, make sure that a user has the `USAGE` privilege on the language. When you create a function, you can specify a language parameter as shown in the following examples.
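
For example, the following sketch grants the required language privilege; the role name is hypothetical.

```
-- app_user is a hypothetical role name
GRANT USAGE ON LANGUAGE plpgsql TO app_user;
```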

### Examples


The following example creates a new FUNC_ALG function.

```
CREATE OR REPLACE FUNCTION FUNC_ALG(P_NUM NUMERIC)
RETURNS NUMERIC
AS $$
BEGIN
  RETURN P_NUM * 2;
END; $$
LANGUAGE PLPGSQL;
```

The `CREATE OR REPLACE` statement creates a new function or replaces an existing function, with these limitations:
+ You can’t change the function name or argument types.
+ The statement doesn’t allow changing the existing function return type.
+ The user must own the function to replace it.

Note the following about the preceding example:
+ The `P_NUM` input parameter behaves much like a SQL Server T-SQL input parameter.
+ The double dollar signs (`$$`) remove the need to escape single quotation marks inside the function body. The double dollar sign appears after the keyword `AS` and after the closing `END` keyword.
+ The `LANGUAGE PLPGSQL` parameter specifies the language of the created function.

The following example creates a function with PostgreSQL PL/pgSQL.

```
CREATE OR REPLACE FUNCTION EMP_SAL_RAISE
(IN P_EMP_ID DOUBLE PRECISION, IN SAL_RAISE DOUBLE PRECISION)
RETURNS VOID
AS $$
DECLARE
V_EMP_CURRENT_SAL DOUBLE PRECISION;
BEGIN
SELECT SALARY INTO STRICT V_EMP_CURRENT_SAL
FROM EMPLOYEES WHERE EMPLOYEE_ID = P_EMP_ID;

UPDATE EMPLOYEES SET SALARY = V_EMP_CURRENT_SAL + SAL_RAISE WHERE EMPLOYEE_ID = P_EMP_ID;

RAISE DEBUG USING MESSAGE := CONCAT_WS('', 'NEW SALARY FOR EMPLOYEE ID: ', P_EMP_ID, '
IS ', (V_EMP_CURRENT_SAL + SAL_RAISE));
EXCEPTION
WHEN OTHERS THEN
RAISE USING ERRCODE := '20001', MESSAGE := CONCAT_WS('', 'AN ERROR WAS ENCOUNTERED -', SQLSTATE, ' -ERROR-', SQLERRM);
END; $$
LANGUAGE PLPGSQL;

select emp_sal_raise(200, 1000);
```

In the preceding example, you can replace the `RAISE` command with `RETURN` to inform the application that an error occurred.

The following example creates a function with PostgreSQL PL/pgSQL.

```
CREATE OR REPLACE FUNCTION EMP_PERIOD_OF_SERVICE_YEAR (IN P_EMP_ID DOUBLE PRECISION)
RETURNS DOUBLE PRECISION
AS $$
DECLARE
V_PERIOD_OF_SERVICE_YEARS DOUBLE PRECISION;
BEGIN
SELECT
EXTRACT (YEAR FROM NOW()) - EXTRACT (YEAR FROM (HIRE_DATE))
INTO STRICT V_PERIOD_OF_SERVICE_YEARS
FROM EMPLOYEES
WHERE EMPLOYEE_ID = P_EMP_ID;
RETURN V_PERIOD_OF_SERVICE_YEARS;
END; $$
LANGUAGE PLPGSQL;

SELECT EMPLOYEE_ID,FIRST_NAME, EMP_PERIOD_OF_SERVICE_YEAR(EMPLOYEE_ID) AS
PERIOD_OF_SERVICE_YEAR
FROM EMPLOYEES;
```

PostgreSQL version 10 introduced new behavior for set-returning functions, which are now evaluated as if they were called from a `LATERAL FROM` clause.

 **PostgreSQL version 9.6 and lower** 

```
CREATE TABLE emps (id int, manager int);
INSERT INTO emps VALUES (23, 24), (52, 23), (21, 65);
SELECT id, generate_series(1,5) AS g FROM emps;

id  g
23  1
23  2
23  3
23  4
23  5
52  1
52  2
52  3
52  4
52  5
21  1
21  2
21  3
21  4
21  5
```

 **PostgreSQL version 10 and higher** 

```
SELECT id, g FROM emps, LATERAL generate_series(1,5) AS g;

id  g
23  1
23  2
23  3
23  4
23  5
52  1
52  2
52  3
52  4
52  5
21  1
21  2
21  3
21  4
21  5
```

In the preceding example, the planner can place the set-returning function on the outside of the nested loop join because it has no actual lateral dependency on the `emps` table.

## Summary


The following table summarizes the differences between stored procedures in SQL Server and PostgreSQL.


| Feature | SQL Server |  Aurora PostgreSQL  | Workaround | 
| --- | --- | --- | --- | 
|  General CREATE syntax differences  |  <pre>CREATE PROC|PROCEDURE<br /><Procedure Name><br />@Parameter1 <Type>, ...n<br />AS<br /><Body></pre>  |  <pre>CREATE [ OR REPLACE] FUNCTION<br /><Function Name> (Parameter1 <Type>, ...n)<br />AS $$<br /><body></pre>  |  Rewrite stored procedure creation scripts to use `FUNCTION` instead of `PROC` or `PROCEDURE`. Rewrite stored procedure creation scripts to use the `AS $$` pattern. Rewrite stored procedure parameters to not use the `@` symbol in parameter names. Add parentheses around the parameter declaration.  | 
|  Security context  |  <pre>{ EXEC | EXECUTE } AS<br />{ CALLER | SELF | OWNER<br />| 'user_name' }</pre>  |  <pre>SECURITY INVOKER | SECURITY DEFINER</pre>  |  For stored procedures that use an explicit user name, rewrite the code from `EXECUTE AS user` to `SECURITY DEFINER` and recreate the functions with this user. For stored procedures that use the `CALLER` option, rewrite the code to include `SECURITY INVOKER`. For stored procedures that use the `SELF` option, rewrite the code to `SECURITY DEFINER`.  | 
|  Encryption  |  Use the `WITH ENCRYPTION` option.  |  Not supported in Aurora PostgreSQL.  |  | 
|  Parameter direction  |   `IN` and `OUT\|OUTPUT`. `IN` is the default, and an `OUT` parameter can also be used as `IN`.  |   `IN`, `OUT`, `INOUT`, or `VARIADIC`   |  Although the functionality of these parameters is the same for SQL Server and PostgreSQL, rewrite the code for syntax compliance. Use `OUT` instead of `OUTPUT`. Use `INOUT` instead of `OUT` for bidirectional parameters.  | 
|  Recompile  |  Use the `WITH RECOMPILE` option.  |  Not supported in Aurora PostgreSQL.  |  | 
|  Table-valued parameters  |  Use declared table type user-defined parameters.  |  Use declared table type user-defined parameters.  |  | 
|  Additional restrictions  |  Use `BULK INSERT` to load data from text file.  |  Not supported in Aurora PostgreSQL.  |  | 

For more information, see [CREATE FUNCTION](https://www.postgresql.org/docs/13/sql-createfunction.html), [PL/pgSQL — SQL Procedural Language](https://www.postgresql.org/docs/13/plpgsql.html), [Procedural Languages](https://www.postgresql.org/docs/13/xplang.html), and [Query Language (SQL) Functions](https://www.postgresql.org/docs/13/xfunc-sql.html) in the *PostgreSQL documentation*.

# Error handling for T-SQL


This topic provides reference information about error handling in SQL Server and Amazon Aurora PostgreSQL, focusing on the differences and similarities between the two systems. You can use this knowledge to understand how error handling mechanisms in SQL Server translate to Aurora PostgreSQL when migrating your database. The topic compares specific error handling features, such as TRY…​CATCH blocks and THROW statements, with their PostgreSQL equivalents.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |  N/A  |  Different paradigm and syntax will require rewrite of error handling code.  | 

## SQL Server Usage


SQL Server error handling capabilities have improved significantly over the years. However, earlier features are retained for backward compatibility.

Before SQL Server 2005, only very basic error handling features were available, and `RAISERROR` was the primary statement used for error handling.

SQL Server 2005 introduced extensive .NET-like error handling capabilities, including `TRY…CATCH` blocks, the `FORMATMESSAGE` function, and a set of system functions that return metadata for the current error condition. SQL Server 2012 added the `THROW` statement.

### TRY…​CATCH Blocks


 `TRY…CATCH` blocks implement error handling similar to Microsoft Visual C# and Microsoft Visual C++. `TRY … END TRY` statement blocks can contain T-SQL statements.

If an error is raised by any of the statements within the `TRY …​ END TRY` block, the run stops and is moved to the nearest set of statements that are bounded by a `CATCH …​ END CATCH` block.

```
BEGIN TRY
<Set of SQL Statements>
END TRY
BEGIN CATCH
<Set of SQL Error Handling Statements>
END CATCH
```

### THROW


The `THROW` statement raises an exception and transfers run of the `TRY …​ END TRY` block of statements to the associated `CATCH …​ END CATCH` block of statements.

Throw accepts either constant literals or variables for all parameters.

```
THROW [<Error Number>, <Error Message>, <Error State>] [;]
```

### Examples


The following example uses `TRY…​CATCH` error blocks to handle key violations.

```
CREATE TABLE ErrorTest (Col1 INT NOT NULL PRIMARY KEY);
```

```
BEGIN TRY
  BEGIN TRANSACTION
    INSERT INTO ErrorTest(Col1) VALUES(1);
    INSERT INTO ErrorTest(Col1) VALUES(2);
    INSERT INTO ErrorTest(Col1) VALUES(1);
  COMMIT TRANSACTION;
END TRY
BEGIN CATCH
  THROW; -- Throw with no parameters = RETHROW
END CATCH;
```

```
(1 row affected)
(1 row affected)
(0 rows affected)
Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__ErrorTes__A259EE54D8676973'.
Can't insert duplicate key in object 'dbo.ErrorTest'. The duplicate key value is (1).
```

**Note**  
Contrary to what many SQL developers believe, the values 1 and 2 are indeed inserted into the `ErrorTest` table in the preceding example. This behavior is in accordance with ANSI specifications stating that a constraint violation should not roll back an entire transaction.

The following example uses `THROW` with variables.

```
BEGIN TRY
BEGIN TRANSACTION
INSERT INTO ErrorTest(Col1) VALUES(1);
INSERT INTO ErrorTest(Col1) VALUES(2);
INSERT INTO ErrorTest(Col1) VALUES(1);
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
DECLARE @CustomMessage VARCHAR(1000),
  @CustomError INT,
  @CustomState INT;
SET @CustomMessage = 'My Custom Text ' + ERROR_MESSAGE();
SET @CustomError = 54321;
SET @CustomState = 1;
THROW @CustomError, @CustomMessage, @CustomState;
END CATCH;
```

```
(0 rows affected)
Msg 54321, Level 16, State 1, Line 19
My Custom Text Violation of PRIMARY KEY constraint 'PK__ErrorTes__A259EE545CBDBB9A'.
Can't insert duplicate key in object 'dbo.ErrorTest'. The duplicate key value is (1).
```

### RAISERROR


The `RAISERROR` statement is used to explicitly raise an error message, similar to `THROW`. It causes an error state for the run session and forwards run to either the calling scope or, if the error occurred within a `TRY …​ END TRY` block, to the associated `CATCH …​ END CATCH` block. `RAISERROR` can reference a user-defined message stored in the `sys.messages` system table or can be used with dynamic message text.

The key differences between `THROW` and `RAISERROR` are:
+ Message IDs passed to `RAISERROR` must exist in the sys.messages system table. The error number parameter passed to THROW doesn’t.
+  `RAISERROR` message text may contain `printf` formatting styles. The message text of `THROW` may not.
+  `RAISERROR` uses the severity parameter for the error returned. For `THROW`, severity is always 16.

```
RAISERROR (<Message ID>|<Message Text>, <Message Severity>, <Message State>
[WITH option [<Option List>]])
```

The following example raises a custom error.

```
RAISERROR (N'This is a custom error message with severity 10 and state 1.', 10, 1)
```

### FORMATMESSAGE


 `FORMATMESSAGE` returns a string message built either from an existing error message in the `sys.messages` system table or from a text string, applying the optional parameter list replacements. The `FORMATMESSAGE` statement is similar to the `RAISERROR` statement.

```
FORMATMESSAGE (<Message Number> | <Message String>, <Parameter List>)
```

### Error State Functions


SQL Server provides the following error state functions:
+ ERROR_LINE
+ ERROR_MESSAGE
+ ERROR_NUMBER
+ ERROR_PROCEDURE
+ ERROR_SEVERITY
+ ERROR_STATE
+ @@ERROR

The following example uses error state functions within a `CATCH` block.

```
CREATE TABLE ErrorTest (Col1 INT NOT NULL PRIMARY KEY);
```

```
BEGIN TRY
  BEGIN TRANSACTION;
    INSERT INTO ErrorTest(Col1) VALUES(1);
    INSERT INTO ErrorTest(Col1) VALUES(2);
    INSERT INTO ErrorTest(Col1) VALUES(1);
  COMMIT TRANSACTION;
END TRY
BEGIN CATCH
  SELECT ERROR_LINE(),
    ERROR_MESSAGE(),
    ERROR_NUMBER(),
    ERROR_PROCEDURE(),
    ERROR_SEVERITY(),
    ERROR_STATE(),
    @@Error;
THROW;
END CATCH;
```

```
6
Violation of PRIMARY KEY constraint 'PK__ErrorTes__A259EE543C8912D8'. Can't insert
duplicate key in object 'dbo.ErrorTest'. The duplicate key value is (1).
2627
NULL
14
1
2627
```

```
(1 row affected)
(1 row affected)
(0 rows affected)
(1 row affected)
Msg 2627, Level 14, State 1, Line 25
Violation of PRIMARY KEY constraint 'PK__ErrorTes__A259EE543C8912D8'. Can't insert
duplicate key in object 'dbo.ErrorTest'. The duplicate key value is (1).
```

For more information, see [RAISERROR (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/raiserror-transact-sql?view=sql-server-ver15), [TRY…​CATCH (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/try-catch-transact-sql?view=sql-server-ver15), and [THROW (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/throw-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) doesn’t provide a native replacement for SQL Server error handling features and options, but it has many comparable options.

To trap errors, use a `BEGIN … EXCEPTION … END` block. By default, any error raised in a PL/pgSQL function aborts the function and the surrounding transaction. You can trap and recover from errors by using a `BEGIN` block with an `EXCEPTION` clause. The syntax is an extension of the normal syntax for a `BEGIN` block.

### Syntax


```
[ <<label>> ]
[ DECLARE
  declarations ]
BEGIN
  statements
EXCEPTION
  WHEN condition [ OR condition ... ] THEN
    handler_statements
  [ WHEN condition [ OR condition ... ] THEN
    handler_statements
  ... ]
END;
```

In the preceding syntax, condition is an error condition name or an SQLSTATE code. For example:
+  `WHEN interval_field_overflow THEN…​` 
+  `WHEN SQLSTATE '22015' THEN…​` 

For all error codes, see [PostgreSQL Error Codes](https://www.postgresql.org/docs/13/errcodes-appendix.html) in the *PostgreSQL documentation*.

### Throw errors


You can use the PostgreSQL `RAISE` statement to throw errors. You can combine `RAISE` with several levels of severity including:


| Severity | Usage | 
| --- | --- | 
|  DEBUG1..DEBUG5  |  Provides successively more detailed information for use by developers.  | 
|  INFO  |  Provides information implicitly requested by the user.  | 
|  NOTICE  |  Provides information that might be helpful to users.  | 
|  WARNING  |  Provides warnings of likely problems.  | 
|  ERROR  |  Reports an error that caused the current command to abort.  | 
|  LOG  |  Reports information of interest to administrators. For example, checkpoint activity.  | 
|  FATAL  |  Reports an error that caused the current session to abort.  | 
|  PANIC  |  Reports an error that caused all database sessions to abort.  | 

### Examples


The following example uses `RAISE DEBUG`, where `DEBUG` is the configurable severity level.

```
SET CLIENT_MIN_MESSAGES = 'debug';

DO $$
BEGIN
RAISE DEBUG USING MESSAGE := 'hello world';
END $$;

DEBUG: hello world
DO
```

The following example uses the `client_min_messages` parameter to control the level of messages sent to the client. The default is `NOTICE`. Use the `log_min_messages` parameter to control which message levels are written to the server log. The default is `WARNING`.

```
SET CLIENT_MIN_MESSAGES = 'debug';
```

The following example uses `EXCEPTION..WHEN…THEN` inside a `BEGIN` and `END` block to handle a division by zero violation.

```
DO $$
BEGIN
  PERFORM 5/0;
EXCEPTION
  WHEN division_by_zero THEN
    RAISE NOTICE 'caught division_by_zero';
END $$;
```

## Summary


The following table identifies similarities, differences, and key migration considerations.


| SQL Server error handling feature |  Aurora PostgreSQL equivalent | 
| --- | --- | 
|   `TRY …​ END TRY` and `CATCH …​ END CATCH` blocks  |  <pre>Inner<br />BEGIN<br />...<br />EXCEPTION WHEN ... THEN<br />END</pre>  | 
|   `THROW` and `RAISERROR`   |   `RAISE`   | 
|   `FORMATMESSAGE`   |   `RAISE [ level ] 'format'` or `ASSERT`   | 
|  Error state functions  |   `GET STACKED DIAGNOSTICS`   | 
|  Proprietary error messages in `sys.messages` system table  |  RAISE  | 
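
As a sketch of the error state function replacement, `GET STACKED DIAGNOSTICS` retrieves details about the current exception inside an `EXCEPTION` handler.

```
DO $$
DECLARE
  v_state   TEXT;
  v_msg     TEXT;
  v_context TEXT;
BEGIN
  PERFORM 5/0;
EXCEPTION
  WHEN OTHERS THEN
    GET STACKED DIAGNOSTICS
      v_state   = RETURNED_SQLSTATE,
      v_msg     = MESSAGE_TEXT,
      v_context = PG_EXCEPTION_CONTEXT;
    RAISE NOTICE 'SQLSTATE: %, message: %', v_state, v_msg;
END $$;
```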

For more information, see [Error Handling](https://www.postgresql.org/docs/13/ecpg-errors.html), [Errors and Messages](https://www.postgresql.org/docs/13/plpgsql-errors-and-messages.html), and [When to Log](https://www.postgresql.org/docs/13/runtime-config-logging.html#GUC-LOG-MIN-MESSAGES) in the *PostgreSQL documentation*.

# Flow control for T-SQL


This topic provides reference information comparing flow control constructs between Microsoft SQL Server and Amazon Aurora PostgreSQL. You can use this information to understand the similarities and differences in flow control mechanisms when migrating from SQL Server to Aurora PostgreSQL. The topic outlines various flow control commands available in both systems, highlighting where direct equivalents exist and suggesting alternatives where they don’t.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |   [Flow Control](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.flowcontrol)   |  PostgreSQL doesn’t support `GOTO` and `WAITFOR TIME`.  | 

## SQL Server Usage


Although SQL is a mostly declarative language, SQL Server supports flow control commands that enable dynamic changes to a script’s run path at runtime.

Before SQL/PSM was introduced in SQL:1999, the ANSI standard didn’t include flow control constructs. Therefore, there are significant syntax differences among RDBMS engines.

SQL Server provides the following flow control keywords.
+  `BEGIN…​ END` — Define boundaries for a block of commands that are run together.
+  `RETURN` — Exit a server code module (stored procedure, function, and so on) and return control to the calling scope. You can use `RETURN <value>` to return an `INT` value to the calling scope.
+  `BREAK` — Exit a `WHILE` loop.
+  `THROW` — Raise errors and potentially return control to the calling stack.
+  `CONTINUE` — Restart a `WHILE` loop.
+  `TRY…​ CATCH` — Error handling. For more information, see [Error Handling](chap-sql-server-aurora-pg.tsql.errorhandling.md).
+  `GOTO label` — Moves the run point to the location of the specified label.
+  `WAITFOR` — Delay.
+  `IF…​ ELSE` — Conditional flow control.
+  `WHILE <condition>` — Continue looping while <condition> returns TRUE.

**Note**  
WHILE loops are commonly used with cursors and use the system variable `@@FETCH_STATUS` to determine when to exit. For more information, see [Cursors](chap-sql-server-aurora-pg.tsql.cursors.md).

### Examples


The following example demonstrates a solution for running different processes based on the number of items in an order.

Create and populate an OrderItems table.

```
CREATE TABLE OrderItems
(
OrderID INT NOT NULL,
Item VARCHAR(20) NOT NULL,
Quantity SMALLINT NOT NULL,
PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200);
```

Declare a cursor for looping through all OrderItems and calculating the total quantity for each order.

```
DECLARE OrderItemCursor CURSOR FAST_FORWARD
FOR
SELECT OrderID,
  SUM(Quantity) AS NumItems
FROM OrderItems
GROUP BY OrderID
ORDER BY OrderID;

DECLARE @OrderID INT, @NumItems INT;

-- Instantiate the cursor and loop through all orders.
OPEN OrderItemCursor;

FETCH NEXT FROM OrderItemCursor
INTO @OrderID, @NumItems

WHILE @@Fetch_Status = 0
BEGIN
  IF @NumItems > 100
    PRINT 'EXECUTING LogLargeOrder - '
    + CAST(@OrderID AS VARCHAR(5))
    + ' ' + CAST(@NumItems AS VARCHAR(5));
  ELSE
    PRINT 'EXECUTING LogSmallOrder - '
    + CAST(@OrderID AS VARCHAR(5))
    + ' ' + CAST(@NumItems AS VARCHAR(5));

FETCH NEXT FROM OrderItemCursor
INTO @OrderID, @NumItems;
END;

-- Close and deallocate the cursor.
CLOSE OrderItemCursor;
DEALLOCATE OrderItemCursor;
```

For the preceding example, the result looks like the following.

```
EXECUTING LogSmallOrder - 1 100
EXECUTING LogSmallOrder - 2 100
EXECUTING LogLargeOrder - 3 200
```

For more information, see [Control-of-Flow](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/control-of-flow?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) provides the following PL/pgSQL flow control constructs:
+  `BEGIN… END` — Define boundaries for a block of commands that are run together.
+  `CASE` — Run a set of commands based on a predicate (not to be confused with `CASE` expressions).
+  `IF… ELSIF… ELSE` — Perform conditional flow control.
+  `CONTINUE` — Skip to the next iteration of a `LOOP`, `WHILE`, or `FOR` statement.
+  `EXIT [WHEN <condition>]` — Leave the innermost loop, or a labeled enclosing loop.
+  `LOOP` — Loop indefinitely.
+  `FOR` — Iterate over a range of integers or over the rows of a query result.
+  `RETURN` — Terminate the run of the current scope and return to the calling scope.
+  `WHILE` — Continue looping while the condition returns TRUE.

### Examples


The following example demonstrates a solution for running different logic based on the number of items in an order. It provides the same functionality as the example for SQL Server flow control. However, unlike the SQL Server example, which runs as a batch script, Aurora PostgreSQL variables can only be used in server-side code such as procedures, functions, and anonymous `DO` blocks.
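Before walking through the full cursor example, the following hedged sketch shows several of the preceding constructs in an anonymous `DO` block, which runs PL/pgSQL without creating a permanent routine:

```
DO $$
BEGIN
  FOR i IN 1..5 LOOP
    CONTINUE WHEN i % 2 = 0; -- skip even values
    RAISE NOTICE 'odd value: %', i;
  END LOOP;
END $$;
```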

Create and populate an OrderItems table.

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200);
```

Create a procedure to declare a cursor and loop through the order items.

```
CREATE OR REPLACE FUNCTION P()
  RETURNS void
  LANGUAGE plpgsql
AS $function$
DECLARE
  var_OrderID int;
  var_NumItems int;
  OrderItemCursor CURSOR FOR
    SELECT OrderID, SUM(Quantity) AS NumItems
    FROM OrderItems
    GROUP BY OrderID
    ORDER BY OrderID;
BEGIN
  OPEN OrderItemCursor;
  LOOP
    FETCH FROM OrderItemCursor INTO var_OrderID, var_NumItems;
    EXIT WHEN NOT FOUND;
    IF var_NumItems > 100 THEN
      RAISE NOTICE 'EXECUTING LogLargeOrder - %', var_OrderID;
      RAISE NOTICE 'Num Items: %', var_NumItems;
    ELSE
      RAISE NOTICE 'EXECUTING LogSmallOrder - %', var_OrderID;
      RAISE NOTICE 'Num Items: %', var_NumItems;
    END IF;
  END LOOP;
  CLOSE OrderItemCursor;
END;
$function$;
```

## Summary


While there are some syntax differences between SQL Server and Aurora PostgreSQL flow control statements, most rewrites should be straightforward. The following table summarizes the differences and identifies how to modify T-SQL code to support similar functionality in Aurora PostgreSQL PL/pgSQL.


| Command | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|   `BEGIN…​END`   |  Define command block boundaries.  |  Define command block boundaries.  | 
|   `RETURN`   |  Exit the current scope and return to caller. Supported for both scripts and stored code such as procedures and functions.  |  Exit a stored function and return to caller.  | 
|   `BREAK`   |  Exit a `WHILE` loop.  |   `EXIT` or `EXIT WHEN`   | 
|   `THROW`   |  Raise errors and potentially return control to the calling stack.  |  Raise errors and potentially return control to the calling stack.  | 
|   `TRY…​CATCH`   |  Error handling.  |  Error handling. For more information, see [Error Handling](chap-sql-server-aurora-pg.tsql.errorhandling.md).  | 
|   `GOTO`   |  Move run to a specified label  |  Consider rewriting the flow logic using either `CASE` statements or nested stored procedures. You can use nested stored procedures to circumvent this limitation by separating code sections and encapsulating them in sub-procedures. Use `IF <condition> EXEC <stored procedure>` instead of `GOTO`.  | 
|   `WAITFOR`   |  Delay  |   `pg_sleep`. For more information, see [Date/Time Functions and Operators](https://www.postgresql.org/docs/13/static/functions-datetime.html) in the *PostgreSQL documentation*.  | 
|   `IF…​ ELSE`   |  Conditional flow control.  |  Conditional flow control.  | 
|   `WHILE`   |  Continue running while condition is true.  |  Continue running while condition is true.  | 

For more information, see [Control Structures](https://www.postgresql.org/docs/13/plpgsql-control-structures.html) in the *PostgreSQL documentation*.
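As a concrete illustration of the `WAITFOR` to `pg_sleep` mapping in the preceding table, both of the following statements pause execution for five seconds.

```
-- SQL Server
WAITFOR DELAY '00:00:05';

-- Aurora PostgreSQL
SELECT pg_sleep(5);
```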

# Full-text search for T-SQL


This topic provides reference information about full-text search capabilities in Microsoft SQL Server and PostgreSQL, which is relevant to migrating from SQL Server 2019 to Amazon Aurora PostgreSQL. It explains the differences in how these database systems implement full-text search functionality, including index creation, query syntax, and performance optimization techniques.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |   [Full-Text Search](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.fulltextsearch)   |  Different paradigm and syntax require rewriting the application.  | 

## SQL Server Usage


SQL Server supports an optional framework for running full-text search queries against character-based data in SQL Server tables using an integrated, in-process full-text engine and a `fdhost.exe` filter daemon host process.

To run full-text queries, create a full-text catalog. This catalog in turn may contain one or more full-text indexes. A full-text index consists of one or more textual columns of a table.

Full-text queries perform smart linguistic searches against full-text indexes by identifying words and phrases based on specific language rules. The searches can be for simple words, complex phrases, or multiple forms of a word or a phrase. They can return ranking scores for matches or hits.

### Full-Text Indexes


You can create a full-text index on one or more columns of a table or view for any of the following data types:
+  `CHAR` — Fixed size ASCII string column data type.
+  `VARCHAR` — Variable size ASCII string column data type.
+  `NCHAR` — Fixed size UNICODE string column data type.
+  `NVARCHAR` — Variable size UNICODE string column data type.
+  `TEXT` — ASCII BLOB string column data type. This data type is deprecated.
+  `NTEXT` — UNICODE BLOB string column data type. This data type is deprecated.
+  `IMAGE` — Binary BLOB data type. This data type is deprecated.
+  `XML` — XML structured BLOB data type.
+  `VARBINARY(MAX)` — Binary BLOB data type.
+  `FILESTREAM` — File-based storage data type.

For more information, see [Data Types](chap-sql-server-aurora-pg.sql.datatypes.md).

You can use the `CREATE FULLTEXT INDEX` statement to create full-text indexes. A full-text index may contain up to 1024 columns from a single table or view.

When you create full-text indexes on `BINARY` type columns, you can store documents such as Microsoft Word as a binary stream and parse them correctly by the full-text engine.

### Full-Text Catalogs


Full-text indexes are contained within full-text catalog objects. A full-text catalog is a logical container for one or more full-text indexes. You can use a full-text catalog to collectively administer the indexes as a group for tasks such as backup, restore, and content refresh.

You can use the `CREATE FULLTEXT CATALOG` statement to create full-text catalogs. A full-text catalog may contain zero or more full-text indexes and is limited in scope to a single database.

### Full-Text Queries


After you create and populate a full-text catalog and index, you can run full-text queries against these indexes to query for:
+ Simple term match for one or more words or phrases.
+ Prefix term match for words that begin with a set of characters.
+ Generational term match for inflectional forms of a word.
+ Proximity term match for words or phrases that are close to another word or phrase.
+ Thesaurus search for synonymous forms of a word.
+ Weighted term match for finding words or phrases with weighted proximity values.

Full-text queries are integrated into T-SQL and use the following predicates and functions:
+  `CONTAINS` predicate.
+  `FREETEXT` predicate.
+  `CONTAINSTABLE` table-valued function.
+  `FREETEXTTABLE` table-valued function.

**Note**  
Don’t confuse full-text functionality with the `LIKE` predicate, which is used for pattern matching only.
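As a hedged sketch, assuming a full-text index exists on the `ReviewText` column of a `ProductReviews` table (such as the one created in the examples that follow), `CONTAINSTABLE` returns matches together with relevance scores through its `KEY` and `RANK` columns:

```
-- Return reviews matching 'excellent', most relevant first.
SELECT pr.ReviewID, pr.ReviewText, ft.[RANK]
FROM ProductReviews AS pr
INNER JOIN CONTAINSTABLE(ProductReviews, ReviewText, 'excellent') AS ft
  ON pr.ReviewID = ft.[KEY]
ORDER BY ft.[RANK] DESC;
```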

### Updating Full-Text Indexes


By default, full-text indexes are automatically updated when the underlying data is modified, similar to a normal B-tree or columnstore index. However, large changes to the underlying data may cause a performance impact because updating full-text indexes is a resource-intensive operation. In these cases, you can disable automatic updates of the catalog and update it manually, or on a schedule, to keep the catalog up to date with the underlying tables.

**Note**  
You can monitor the status of the full-text catalog by using the `FULLTEXTCATALOGPROPERTY (<Full-text Catalog Name>, 'Populatestatus')` function.
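The following hedged sketch (table name assumed) switches a full-text index to manual change tracking and then repopulates it on demand:

```
-- Stop automatic updates for the full-text index on ProductReviews.
ALTER FULLTEXT INDEX ON ProductReviews SET CHANGE_TRACKING MANUAL;

-- Later, repopulate the index manually or from a scheduled job.
ALTER FULLTEXT INDEX ON ProductReviews START FULL POPULATION;
```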

### Examples


The following example creates a product review table.

```
CREATE TABLE ProductReviews
(
  ReviewID INT NOT NULL
  IDENTITY(1,1),
  CONSTRAINT PK_ProductReviews PRIMARY KEY(ReviewID),
  ProductID INT NOT NULL
  /*REFERENCES Products(ProductID)*/,
  ReviewText VARCHAR(4000) NOT NULL,
  ReviewDate DATE NOT NULL,
  UserID INT NOT NULL
  /*REFERENCES Users(UserID)*/
);
```

```
INSERT INTO ProductReviews
( ProductID, ReviewText, ReviewDate, UserID)
VALUES
(1, 'This is a review for product 1, it is excellent and works as expected','20180701', 2),
(1, 'This is a review for product 1, it isn''t that great and failed after two days','20180702', 2),
(2, 'This is a review for product 3, it has exceeded my expectations. A+++','20180710', 2);
```

The following example creates a full-text catalog for product reviews.

```
CREATE FULLTEXT CATALOG ProductFTCatalog;
```

The following example creates a full-text index for product reviews.

```
CREATE FULLTEXT INDEX
ON ProductReviews (ReviewText)
KEY INDEX PK_ProductReviews
ON ProductFTCatalog;
```

The following example queries the full-text index for reviews containing the word `excellent`.

```
SELECT *
FROM ProductReviews
WHERE CONTAINS(ReviewText, 'excellent');
```

```
ReviewID  ProductID  ReviewText                                                             ReviewDate  UserID
1         1          This is a review for product 1, it is excellent and works as expected  2018-07-01  2
```

For more information, see [Full-Text Search](https://docs.microsoft.com/en-us/previous-versions/sql/2014/relational-databases/search/full-text-search?view=sql-server-2014&viewFallbackFrom=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


You can use full-text indexes to speed up searches against textual data that use the full-text `@@` match operator.

You can create full-text indexes on almost any column data type, depending on the operator class used when the index is created. You can query the available classes from the `pg_opclass` catalog table; a default class is defined for each data type.

The default operator class indexes `tsvector` values. The most common pattern is to keep the original text in a column of type text (or another type) and use a trigger to maintain a `tsvector` copy of it.
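The following sketch illustrates that pattern with hypothetical table and column names, using the built-in `tsvector_update_trigger` function to keep a `tsvector` column in sync with a text column:

```
CREATE TABLE docs
(
  id serial PRIMARY KEY,
  body text,
  body_tsv tsvector
);

-- Maintain body_tsv automatically from body on every write.
CREATE TRIGGER docs_tsv_update
BEFORE INSERT OR UPDATE ON docs
FOR EACH ROW EXECUTE FUNCTION
tsvector_update_trigger(body_tsv, 'pg_catalog.english', body);

CREATE INDEX docs_tsv_idx ON docs USING GIN (body_tsv);
```

In PostgreSQL 12 and later, a stored generated column (`GENERATED ALWAYS AS (to_tsvector('english', body)) STORED`) achieves the same result without a trigger.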

There are two index types for full-text searches: `GIN` and `GiST`.

 `GIN` indexes are slower to build than `GiST` because they are exact and return no false positives, but they are faster to query.

You can improve the `GIN` performance on creation by increasing the `maintenance_work_mem` parameter.

When you create `GIN` indexes, you can combine them with these parameters:
+  `fastupdate` — Defers index updates to a pending list that is flushed during `VACUUM` or when the list grows too large. The default value is `ON`.
+  `gin_pending_list_limit` — The maximum size of the pending list, in KB. The default value is 4 MB.
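You set these parameters as storage options when you create the index. The following hedged sketch uses hypothetical table and column names:

```
CREATE INDEX reviews_tsv_idx ON reviews USING GIN (body_tsv)
WITH (fastupdate = off, gin_pending_list_limit = 2048);
```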

You can’t use `GIN` for a composite (multicolumn) index over plain scalar columns unless you add the `btree_gin` extension, which is supported in Amazon Aurora.

```
CREATE EXTENSION btree_gin;
CREATE INDEX reviews_idx ON reviews USING GIN (title, body);
```

### Full-Text Search Functions


 **Boolean search** 

You can use `to_tsquery()`, which accepts a list of words that is checked against the normalized vector created with `to_tsvector()`. To do this, use the `@@` operator to check if `tsquery` matches `tsvector`. For example, the following statement returns `t` because the column contains the word **boy**. This search also returns `t` for **boys** but not for **boyser**.

```
SELECT to_tsvector('The quick young boy jumped over the fence')
@@ to_tsquery('boy');
```

 **Operators search** 

The following example shows how to use the `AND (&)`, `OR (|)`, and `NOT (!)` operators.

```
SELECT to_tsvector('The quick young boy jumped over the fence')
@@ to_tsquery('young & (boy | guy) & !girl');
```

 **Phrase search** 

When using `to_tsquery`, you can also match a stemmed form of a term — for example, replacing boy with boys — if you specify the language to be used.

```
SELECT to_tsvector('The quick young boy jumped over the fence')
@@ to_tsquery('english', 'young & (boys | guy) & !girl');
```

You can also search for words within a specific distance of each other. In the following example, `<->` is equivalent to `<1>`. Both expressions return true.

```
SELECT to_tsvector('The quick young boy jumped over the fence') @@
  to_tsquery('young <-> boy'),
  to_tsvector('The quick young boy jumped over the fence') @@
  to_tsquery('quick <3> jumped');
```

### Migration Considerations


Migrating full-text indexes from SQL Server to Aurora PostgreSQL requires a full rewrite of the code that addresses creating, managing, and querying of full-text searches.

Although the Aurora PostgreSQL full-text engine is significantly less comprehensive than the SQL Server engine, it is also much simpler to create and manage, and it is sufficiently powerful for most common, basic full-text requirements.

You can create a text search dictionary. For more information, see [CREATE TEXT SEARCH DICTIONARY](https://www.postgresql.org/docs/13/sql-createtsdictionary.html).

For more complex full-text workloads, use Amazon CloudSearch, a managed service that makes it simple and cost-effective to set up, manage, and scale an enterprise grade search solution. Amazon CloudSearch supports 34 languages and advanced search features such as highlighting, autocomplete, and geospatial search.

Currently, there is no direct tooling integration with Aurora PostgreSQL. Therefore, create a custom application to synchronize the data between Amazon RDS instances and the CloudSearch service.

For more information, see [Amazon CloudSearch](https://aws.amazon.com/cloudsearch/).

### Examples


```
CREATE TABLE ProductReviews
(
  ReviewID SERIAL PRIMARY KEY,
  ProductID INT NOT NULL,
  ReviewText TEXT NOT NULL,
  ReviewDate DATE NOT NULL,
  UserID INT NOT NULL
);
```

```
INSERT INTO ProductReviews
(ProductID, ReviewText, ReviewDate, UserID)
VALUES
(1, 'This is a review for product 1, it is excellent and works as expected', '20180701', 2),
(1, 'This is a review for product 1, it isn''t that great and failed after two days', '20180702', 2),
(2, 'This is a review for product 3, it has exceeded my expectations. A+++', '20180710', 2);
```

The following example creates a full-text search index.

```
CREATE INDEX gin_idx ON ProductReviews
USING gin (to_tsvector('english', ReviewText));
```

This expression index serves queries that filter on `to_tsvector('english', ReviewText)`. To index a `TEXT` column for trigram similarity and `LIKE` searches instead, use the `gin_trgm_ops` operator class from the `pg_trgm` extension.

The following example queries the full-text index for reviews containing the word excellent.

```
SELECT * FROM ProductReviews
WHERE to_tsvector('english', ReviewText) @@ to_tsquery('excellent');
```

For more information, see [Full Text Search](https://www.postgresql.org/docs/13/textsearch.html) and [Additional Features](https://www.postgresql.org/docs/13/textsearch-features.html) in the *PostgreSQL documentation*.

# SQL server graph features for T-SQL


This topic provides reference information about graph database capabilities in Microsoft SQL Server 2019 and their potential migration to Amazon Aurora PostgreSQL. You can understand the fundamental concepts of graph databases, including nodes, edges, and their unique features for modeling complex relationships. The topic explores how SQL Server implements graph functionality, offering examples of creating graph tables and performing queries.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Two star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-2.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |  N/A  |  No native support. Rewriting the application is required.  | 

## SQL Server Usage


SQL Server offers graph database capabilities to model many-to-many relationships. The graph relationships are integrated into Transact-SQL and receive the benefits of using SQL Server as the foundational database management system.

A graph database is a collection of nodes or vertices and edges or relationships. A node represents an entity. For example, a person or an organization. An edge represents a relationship between the two nodes that it connects. For example, this can be likes or friends. Both nodes and edges may have properties associated with them. Here are some features that make a graph database unique:
+ Edges or relationships are first class entities in a Graph Database and can have attributes or properties associated with them.
+ A single edge can flexibly connect multiple nodes in a Graph Database.
+ You can express pattern matching and multi-hop navigation queries easily.
+ You can express transitive closure and polymorphic queries easily.

A relational database can achieve anything a graph database can. However, a graph database makes it easier to express certain kinds of queries. Also, with specific optimizations, certain queries may perform better. Your decision to choose either a relational or graph database is based on following factors:
+ Your application has hierarchical data. You can use the `HierarchyID` data type to implement hierarchies, but it has some limitations. For example, it doesn’t allow you to store multiple parents for a node.
+ Your application has complex many-to-many relationships. As application evolves, new relationships are added.
+ You need to analyze interconnected data and relationships.

SQL Server 2017 adds new graph database capabilities for modeling graph many-to-many relationships. They include the new `CREATE TABLE` syntax for creating node and edge tables, and the keyword `MATCH` for queries. For more information, see [Graph processing with SQL Server and Azure SQL Database](https://docs.microsoft.com/en-us/sql/relational-databases/graphs/sql-graph-overview?view=sql-server-ver15).

The following example creates SQL Server graph tables.

```
CREATE TABLE Person (ID INTEGER PRIMARY KEY, Name VARCHAR(100), Age INT) AS NODE;
CREATE TABLE friends (StartDate date) AS EDGE;
```

A new `MATCH` clause is introduced to support pattern matching and multi-hop navigation through the graph. The `MATCH` function uses ASCII-art style syntax for pattern matching. The following example uses the `MATCH` function.

```
-- Find friends of John
SELECT Person2.Name
FROM Person Person1, Friends, Person Person2
WHERE MATCH(Person1-(Friends)->Person2)
AND Person1.Name = 'John';
```

SQL Server 2019 adds ability to define cascaded delete actions on an edge constraint in a graph database. Edge constraints enable users to add constraints to their edge tables, thereby enforcing specific semantics and also maintaining data integrity. For more information, see [Edge constraints](https://docs.microsoft.com/en-us/sql/relational-databases/tables/graph-edge-constraints?view=sql-server-ver15) in the *SQL Server documentation*.

In SQL Server 2019, graph tables now have support for table and index partitioning. For more information, see [Partitioned Tables and Indexes](https://docs.microsoft.com/en-us/sql/relational-databases/partitions/partitioned-tables-and-indexes?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


Currently, PostgreSQL doesn’t provide native graph database features, but you can implement some of them using recursive common table expression (CTE) queries or by serializing graphs to regular relations.
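As a hedged sketch, assuming `Person` and `Friends` tables analogous to the SQL Server graph tables shown earlier, with `Friends(PersonID, FriendID)` stored as an ordinary relation, a recursive CTE can answer multi-hop queries such as friends-of-friends of John:

```
WITH RECURSIVE friend_chain (PersonID, FriendID, Depth) AS (
  SELECT f.PersonID, f.FriendID, 1
  FROM Friends f
  JOIN Person p ON p.ID = f.PersonID
  WHERE p.Name = 'John'
  UNION ALL
  SELECT fc.FriendID, f.FriendID, fc.Depth + 1
  FROM friend_chain fc
  JOIN Friends f ON f.PersonID = fc.FriendID
  WHERE fc.Depth < 3 -- cap traversal depth to avoid cycles
)
SELECT DISTINCT p.Name
FROM friend_chain fc
JOIN Person p ON p.ID = fc.FriendID;
```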

# JSON and XML for T-SQL


This topic provides reference information about XML and JSON support in SQL Server and PostgreSQL, which is relevant for migrating from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL. You can understand the similarities and differences in how these database systems handle semi-structured data formats. The topic explores the native support for XML and JSON in both SQL Server and PostgreSQL, including data types, functions, and indexing capabilities.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |   [XML](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.xml)   |  Syntax and option differences, similar functionality. PostgreSQL doesn’t have a `FOR XML` clause.  | 

## SQL Server Usage


JavaScript Object Notation (JSON) and eXtensible Markup Language (XML) are the two most common types of semi-structured data documents used by a variety of data interfaces and NoSQL databases. Most REST web service APIs support JSON as their native data transfer format. XML is an older, more mature framework that is still widely used. It provides many extensions such as XQuery, name spaces, schemas, and more.

The following example is a JSON document:

```
[{
  "name": "Robert",
  "age": "28"
}, {
  "name": "James",
  "age": "71",
  "lastname": "Drapers"
}]
```

The following example is the XML counterpart of the preceding example.

```
<?xml version="1.0" encoding="UTF-16" ?>
<root>
  <Person>
    <name>Robert</name>
    <age>28</age>
  </Person>
  <Person>
    <name>James</name>
    <age>71</age>
    <lastname>Drapers</lastname>
  </Person>
</root>
```

SQL Server provides native support for both JSON and XML in the database using the familiar and convenient T-SQL interface.

### XML Data


SQL Server provides extensive native support for working with XML data including XML Data Types, XML Columns, XML Indexes, and XQuery.

#### XML Data Types and Columns


In SQL Server, you can use the following data types to store XML data:
+ The Native XML Data Type uses a BLOB structure but preserves the XML Infoset, which consists of the containment hierarchy, document order, and element/attribute values. An XML typed document may differ from the original text; white space is removed and the order of objects may change. XML Data stored as a native XML data type has the additional benefit of schema validation.
+ You can use an Annotated Schema (AXSD) to distribute XML documents to one or more tables. Hierarchical structure is maintained, but element order isn’t.
+ You can use CLOB or BLOB such as `VARCHAR(MAX)` and `VARBINARY(MAX)` to store the original XML document.

#### XML Indexes


In SQL Server, you can create `PRIMARY` and `SECONDARY` XML indexes on columns with a native XML data type. You can create secondary indexes for `PATH`, `VALUE`, or `PROPERTY`, which are helpful for various types of workload queries.
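As a hedged sketch, assuming the `MyTable` table with an `XMLDocument` column shown in the examples below, a primary XML index must exist before you create any secondary index:

```
CREATE PRIMARY XML INDEX PXML_MyTable_XMLDocument
ON MyTable (XMLDocument);

-- A PATH secondary index speeds up path-based XQuery predicates.
CREATE XML INDEX IXML_MyTable_XMLDocument_Path
ON MyTable (XMLDocument)
USING XML INDEX PXML_MyTable_XMLDocument FOR PATH;
```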

#### XQuery


SQL Server supports a subset of the W3C XQUERY language specification. You can run queries directly against XML data and use them as expressions or sets in standard T-SQL statements.

The following example uses the XQuery language specification.

```
DECLARE @XMLVar XML = '<Root><Data>My XML Data</Data></Root>';
SELECT @XMLVar.query('/Root/Data');
```

```
Result: <Data>My XML Data</Data>
```

### JSON Data


SQL Server doesn’t support a dedicated JSON data type. However, you can store JSON documents in an `NVARCHAR` column. For more information about BLOBS, see [Data Types](chap-sql-server-aurora-pg.sql.datatypes.md).

SQL Server provides a set of JSON functions. You can use these functions for the following tasks:
+ Retrieve and modify values in JSON documents.
+ Convert JSON objects to a set (table) format.
+ Use standard T-SQL queries with converted JSON objects.
+ Convert tabular results of T-SQL queries to JSON format.

The functions are:
+  `ISJSON` — Tests whether a string contains valid JSON. Use it in a `WHERE` clause to avoid errors.
+  `JSON_VALUE` — Retrieves a scalar value from a JSON document.
+  `JSON_QUERY` — Retrieves a whole object or array from a JSON document.
+  `JSON_MODIFY` — Modifies values in a JSON document.
+  `OPENJSON` — Converts a JSON document to a `SET` that you can use in the `FROM` clause of a T-SQL query.

You can use the `FOR JSON` clause of `SELECT` queries to convert a tabular set to a JSON document.
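The following hedged sketch shows both directions of the conversion; the variable and alias names are illustrative only:

```
-- JSON to rows with OPENJSON.
DECLARE @doc NVARCHAR(MAX) =
N'[{"name":"Robert","age":28},{"name":"James","age":71}]';

SELECT name, age
FROM OPENJSON(@doc)
WITH (name VARCHAR(50) '$.name', age INT '$.age');

-- Rows to JSON with FOR JSON.
SELECT name, age
FROM (VALUES ('Robert', 28), ('James', 71)) AS t(name, age)
FOR JSON PATH;
```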

### Examples


The following example creates a table with a native typed XML column.

```
CREATE TABLE MyTable
(
  XMLIdentifier INT NOT NULL PRIMARY KEY,
  XMLDocument XML NULL
);
```

The following example queries a JSON document.

```
DECLARE @JSONVar NVARCHAR(MAX);
SET @JSONVar = '{"Data":{"Person":[{"Name":"John"},{"Name":"Jane"},
{"Name":"Maria"}]}}';
SELECT JSON_QUERY(@JSONVar, '$.Data');
```

For more information, see [JSON data in SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver15) and [XML Data (SQL Server)](https://docs.microsoft.com/en-us/sql/relational-databases/xml/xml-data-sql-server?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL provides native JSON Document support using the JSON data types `JSON` and `JSONB`.

 `JSON` stores an exact copy of the input text that processing functions must re-parse on each run. It also preserves semantically-insignificant white space between tokens and the order of keys within JSON objects.

 `JSONB` stores data in a decomposed binary format. Input is slightly slower because of the conversion to binary, but processing is significantly faster because no re-parsing is needed on reads. `JSONB` also has the following characteristics:
+ Doesn’t preserve white space.
+ Doesn’t preserve the order of object keys.
+ Doesn’t keep duplicate object keys. If duplicate keys are specified in the input, only the last value is retained.

Most applications store JSON data as `JSONB` unless there are specialized needs. For more information, see [JSON Types](https://www.postgresql.org/docs/13/static/datatype-json.html) in the *PostgreSQL documentation*.

To comply with the full JSON specification, the database encoding must be set to UTF8. With other encodings, non-UTF8 characters are allowed, and the stored data doesn’t conform to the full JSON specification.

In PostgreSQL version 10 and higher, JSON and JSONB are compatible with full-text search.
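For example, the following sketch runs a full-text match directly against a `jsonb` value; only the string values of the document are searched:

```
SELECT to_tsvector('english',
  '{"name": "Robert", "bio": "a quick young developer"}'::jsonb)
@@ to_tsquery('developer');
```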

### Examples


 **Querying JSON data in PostgreSQL uses different syntax than SQL Server** 

The following example returns the JSON document stored in the `emp_data` column associated with `emp_id=1`.

```
SELECT emp_data FROM employees WHERE emp_id = 1;
```

The following example returns all JSON documents stored in the `emp_data` column having a key named address.

```
SELECT emp_data FROM employees WHERE emp_data ? 'address';
```

The following example returns all JSON items that have an address key or a hobbies key.

```
SELECT * FROM employees WHERE emp_data ?| array['address', 'hobbies'];
```

The following example returns all JSON items that have both an address key and a hobbies key.

```
SELECT * FROM employees WHERE emp_data ?& array['address', 'hobbies'];
```

The following example returns the value of home key in the phone numbers array.

```
SELECT emp_data ->'phone numbers'->>'home' FROM employees;
```

The following example returns all JSON documents where the address key is equal to a specified value and return all JSON documents where address key contains a specific string (using like).

```
SELECT * FROM employees WHERE emp_data->>'address' = '1234 First Street, Capital City';
SELECT * FROM employees WHERE emp_data->>'address' like '%Capital City%';
```

The following example removes keys from JSON. You can remove more than one key at a time in PostgreSQL 10 and later.

```
select '{"id":132, "fname":"John", "salary":999999, "bank_account":1234}'::jsonb - '{salary,bank_account}'::text[];
```

For more information, see [JSON Functions and Operators](https://www.postgresql.org/docs/10/functions-json.html) in the *PostgreSQL documentation*.

 **Indexing and Constraints with JSONB Columns** 

You can use the CREATE UNIQUE INDEX statement to enforce constraints on values inside JSON documents.

The following example creates a unique index that forces values of the address key to be unique.

```
CREATE UNIQUE INDEX employee_address_uq ON employees( (emp_data->>'address') ) ;
```

This index allows the first SQL insert statement to work and causes the second to fail.

```
INSERT INTO employees VALUES
(2, 'Second Employee','{ "address": "1234 Second Street, Capital City"}');
INSERT INTO employees VALUES
(3, 'Third Employee', '{ "address": "1234 Second Street, Capital City"}');
ERROR: duplicate key value violates unique constraint "employee_address_uq" SQL state:
23505 Detail: Key ((emp_data ->> 'address'::text))=(1234 Second Street, Capital City)
already exists.
```

For JSON data, PostgreSQL supports B-tree, hash, and Generalized Inverted Indexes (GIN). A GIN index is a special inverted index structure that is useful when an index must map many values to a row (such as indexing JSON documents).

When you use GIN indexes, you can efficiently and quickly query data using only the following JSON operators: `@>, ?, ?&, ?|`.

Without indexes, PostgreSQL is forced to perform a full table scan when filtering data. This condition applies to JSON data and will most likely have a negative impact on performance since Postgres has to step into each JSON document.
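For example, the following hedged containment query can be served by a GIN index on the whole `emp_data` column:

```
SELECT * FROM employees
WHERE emp_data @> '{"address": "1234 First Street, Capital City"}'::jsonb;
```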

The following example creates an index on the address key of `emp_data`.

```
CREATE INDEX idx1_employees ON employees ((emp_data->>'address'));
```

The following example creates a GIN index on a specific key or the entire `emp_data` column.

```
CREATE INDEX idx2_employees ON employees USING gin ((emp_data->'tags'));
CREATE INDEX idx3_employees ON employees USING gin (emp_data);
```

### XML Examples


PostgreSQL provides an XML data type for table columns. The primary advantage of using XML columns, rather than placing XML data in text columns, is that the XML data is type checked when inserted. Additionally, there are support functions to perform type-safe operations.

XML can store well-formed documents, as defined by the XML standard, or content fragments as defined by the production XMLDecl. Content fragments can have more than one top-level element or character node.

You can use `IS DOCUMENT` to evaluate whether a particular XML value is a full document or only a content fragment.

The following example demonstrates how to create XML data and insert it into a table.

Insert a document, and then insert a content fragment. You can insert both types of XML data into the same column. If the XML is incorrect (such as a missing tag), the insert fails with the relevant error. The query retrieves only document records.

```
CREATE TABLE test (a xml);

insert into test values (XMLPARSE (DOCUMENT '<?xml version="1.0"?><Series><title>Simpsons</title><chapter>...</chapter></Series>'));

insert into test values (XMLPARSE (CONTENT 'note<tag>value</tag><tag>value</tag>'));

select * from test where a IS DOCUMENT;
```

Converting XML data to rows is a feature added in PostgreSQL 10. It can be very helpful for reading XML data through a table-like interface.

```
CREATE TABLE xmldata_sample AS SELECT
xml $$
<ROWS>
  <ROW id="1">
    <EMP_ID>532</EMP_ID>
    <EMP_NAME>John</EMP_NAME>
  </ROW>
  <ROW id="5">
    <EMP_ID>234</EMP_ID>
    <EMP_NAME>Carl</EMP_NAME>
    <EMP_DEP>6</EMP_DEP>
    <SALARY unit="dollars">10000</SALARY>
  </ROW>
  <ROW id="6">
    <EMP_ID>123</EMP_ID>
    <EMP_DEP>8</EMP_DEP>
    <SALARY unit="dollars">5000</SALARY>
  </ROW>
</ROWS>
$$ AS data;

SELECT xmltable.*
  FROM xmldata_sample,
    XMLTABLE('//ROWS/ROW'
      PASSING data
      COLUMNS id int PATH '@id',
        ordinality FOR ORDINALITY,
        "EMP_NAME" text,
        "EMP_ID" text PATH 'EMP_ID',
        SALARY_USD float PATH 'SALARY[@unit = "dollars"]',
        MANAGER_NAME text PATH 'MANAGER_NAME' DEFAULT 'not specified');

id  ordinality  EMP_NAME  EMP_ID  salary_usd  manager_name
1   1           John      532                 not specified
5   2           Carl      234     10000       not specified
6   3                     123     5000        not specified
```

## Summary


The following table identifies similarities, differences, and key migration considerations.


| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  XML and JSON native data types.  |  XML with schema collections.  |  JSON.  | 
|  JSON functions.  |   `ISJSON`, `JSON_VALUE`, `JSON_QUERY`, `JSON_MODIFY`, `OPENJSON`, `FOR JSON`.  |  A set of more than 20 dedicated JSON functions. For more information, see [JSON Functions and Operators](https://www.postgresql.org/docs/13/functions-json.html) in the *PostgreSQL documentation*.  | 
|  XML functions.  |   `XQUERY` and `XPATH`, `OPENXML`, `FOR XML`.  |  Many XML functions. For more information, see [XML Functions](https://www.postgresql.org/docs/13/functions-xml.html) in the *PostgreSQL documentation*. PostgreSQL doesn’t have a `FOR XML` clause. You can use `string_agg` instead.  | 
|  XML and JSON indexes.  |  Primary and Secondary `PATH`, `VALUE` and `PROPERTY` indexes.  |  Supported.  | 

For more information, see [XML Type](https://www.postgresql.org/docs/13/datatype-xml.html), [XML Functions](https://www.postgresql.org/docs/13/functions-xml.html), [JSON Types](https://www.postgresql.org/docs/13/datatype-json.html), and [JSON Functions and Operators](https://www.postgresql.org/docs/13/functions-json.html) in the *PostgreSQL documentation*.
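
As the preceding table notes, PostgreSQL has no `FOR XML` clause, but `string_agg` covers the common concatenation use of `FOR XML PATH('')`. The following is a minimal sketch; the `emp_name` column is a hypothetical text column used for illustration.

```
-- SQL Server idiom to build a comma-separated list:
--   SELECT emp_name + ',' FROM employees FOR XML PATH('')
-- PostgreSQL equivalent, with deterministic ordering:
SELECT string_agg(emp_name, ',' ORDER BY emp_name)
FROM employees;
```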

# Merge for T-SQL


This topic contains reference information comparing the MERGE statement in SQL Server with equivalent functionality in PostgreSQL. You can understand the differences in feature compatibility between these database systems when migrating from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |   [MERGE](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.merge)   |  Rewrite to use `INSERT…​ ON CONFLICT`.  | 

## SQL Server Usage


 `MERGE` is a complex, hybrid DML/DQL statement for performing `INSERT`, `UPDATE`, or `DELETE` operations on a target table based on the results of a logical join of the target table and a source data set.

 `MERGE` can also return row sets similar to SELECT using the OUTPUT clause, which gives the calling scope access to the actual data modifications of the `MERGE` statement.

The `MERGE` statement is most efficient for non-trivial conditional DML. For example, inserting data if a row key value doesn’t exist and updating the existing row if the key value already exists.

You can easily manage additional logic such as deleting rows from the target that don’t appear in the source. For simple, straightforward updates of data in one table based on data in another, it is typically more efficient to use simple `INSERT`, `DELETE`, and `UPDATE` statements. You can replace all `MERGE` functionality with `INSERT`, `DELETE`, and `UPDATE` statements, though not necessarily as efficiently.

The SQL Server `MERGE` statement provides a wide range of functionality and flexibility and is compatible with ANSI standard SQL:2008. SQL Server has many extensions to `MERGE` that provide efficient T-SQL solutions for synchronizing data.

### Syntax


```
MERGE [INTO] <Target Table> [[AS] <Table Alias>]
USING <Source Table>
ON <Merge Predicate>
[WHEN MATCHED [AND <Predicate>]
THEN UPDATE SET <Column Assignments...> | DELETE]
[WHEN NOT MATCHED [BY TARGET] [AND <Predicate>]
THEN INSERT [(<Column List>)]
VALUES (<Values List>) | DEFAULT VALUES]
[WHEN NOT MATCHED BY SOURCE [AND <Predicate>]
THEN UPDATE SET <Column Assignments...> | DELETE]
OUTPUT [<Output Clause>]
```

### Examples


The following example performs a simple one-way synchronization of two tables.

```
CREATE TABLE SourceTable
(
  Col1 INT NOT NULL PRIMARY KEY,
  Col2 VARCHAR(20) NOT NULL
);
```

```
CREATE TABLE TargetTable
(
  Col1 INT NOT NULL PRIMARY KEY,
  Col2 VARCHAR(20) NOT NULL
);
```

```
INSERT INTO SourceTable (Col1, Col2)
VALUES
(2, 'Source2'),
(3, 'Source3'),
(4, 'Source4');
```

```
INSERT INTO TargetTable (Col1, Col2)
VALUES
(1, 'Target1'),
(2, 'Target2'),
(3, 'Target3');
```

```
MERGE INTO TargetTable AS TGT
USING SourceTable AS SRC ON TGT.Col1 = SRC.Col1
WHEN MATCHED
  THEN UPDATE SET TGT.Col2 = SRC.Col2
WHEN NOT MATCHED
  THEN INSERT (Col1, Col2)
  VALUES (SRC.Col1, SRC.Col2);
```

```
SELECT * FROM TargetTable;
```

For the preceding examples, the result looks as shown following.

```
Col1  Col2
1     Target1
2     Source2
3     Source3
4     Source4
```

Perform a conditional two-way synchronization using `NULL` for no change and `DELETE` from the target when the data isn’t found in the source.

```
TRUNCATE TABLE SourceTable;
INSERT INTO SourceTable (Col1, Col2) VALUES (3, NULL), (4, 'NewSource4'), (5,'Source5');
```

```
MERGE INTO TargetTable AS TGT
USING SourceTable AS SRC ON TGT.Col1 = SRC.Col1
WHEN MATCHED AND SRC.Col2 IS NOT NULL
  THEN UPDATE SET TGT.Col2 = SRC.Col2
WHEN NOT MATCHED
  THEN INSERT (Col1, Col2)
    VALUES (SRC.Col1, SRC.Col2)
WHEN NOT MATCHED BY SOURCE
  THEN DELETE;
```

```
SELECT *
FROM TargetTable;
```

For the preceding examples, the result looks as shown following.

```
Col1  Col2
3     Source3
4     NewSource4
5     Source5
```

For more information, see [MERGE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


PostgreSQL version 10 doesn’t support the `MERGE` command (native `MERGE` support was added in PostgreSQL 15). As an alternative, consider using the `INSERT…​ ON CONFLICT` clause, which can handle cases where an insert would cause a conflict by redirecting the operation to an update.

### Examples


The following example uses the `ON CONFLICT` clause.

```
CREATE TABLE EMP_BONUS (
EMPLOYEE_ID NUMERIC,
BONUS_YEAR VARCHAR(4),
SALARY NUMERIC,
BONUS NUMERIC,
PRIMARY KEY (EMPLOYEE_ID, BONUS_YEAR));

INSERT INTO EMP_BONUS (EMPLOYEE_ID, BONUS_YEAR, SALARY)
  SELECT EMPLOYEE_ID, EXTRACT(YEAR FROM NOW()), SALARY
  FROM EMPLOYEES
  WHERE SALARY < 10000
  ON CONFLICT (EMPLOYEE_ID, BONUS_YEAR)
  DO UPDATE SET BONUS = EMP_BONUS.SALARY * 0.5;
  SELECT * FROM EMP_BONUS;

employee_id  bonus_year  salary   bonus
103          2017        9000.00  4500.000
104          2017        6000.00  3000.000
105          2017        4800.00  2400.000
106          2017        4800.00  2400.000
107          2017        4200.00  2100.000
109          2017        9000.00  4500.000
110          2017        8200.00  4100.000
111          2017        7700.00  3850.000
112          2017        7800.00  3900.000
113          2017        6900.00  3450.000
115          2017        3100.00  1550.000
116          2017        2900.00  1450.000
117          2017        2800.00  1400.000
118          2017        2600.00  1300.000
```

Running the same operation multiple times using the `ON CONFLICT` clause doesn’t generate an error because the existing records are redirected to the update clause.
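
When the existing row should be left untouched rather than updated, `ON CONFLICT` also accepts `DO NOTHING`, which silently skips conflicting rows. The following sketch reuses the `EMP_BONUS` table from the preceding example.

```
-- Re-inserting an existing (EMPLOYEE_ID, BONUS_YEAR) pair is
-- skipped instead of raising a duplicate key error.
INSERT INTO EMP_BONUS (EMPLOYEE_ID, BONUS_YEAR, SALARY)
VALUES (103, '2017', 9000)
ON CONFLICT (EMPLOYEE_ID, BONUS_YEAR) DO NOTHING;
```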

For more information, see [INSERT](https://www.postgresql.org/docs/13/sql-insert.html) and [Unsupported Features](https://www.postgresql.org/docs/13/unsupported-features-sql-standard.html) in the *PostgreSQL documentation*.

# Pivot and unpivot for T-SQL


This topic provides reference information about feature compatibility between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL, specifically regarding the PIVOT and UNPIVOT operators. You can understand the differences in functionality and learn how to adapt your SQL queries when migrating from SQL Server to Aurora PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[No automation\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-0.png)   |   [PIVOT and UNPIVOT](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.pivot)   |  Straightforward rewrite to use traditional SQL syntax.  | 

## SQL Server Usage


 `PIVOT` and `UNPIVOT` are relational operations used to transform a set by rotating rows into columns and columns into rows.

### PIVOT


The `PIVOT` operator consists of several clauses and implied expressions.

The *anchor column* isn’t pivoted and results in a single row for each unique value, similar to `GROUP BY`.

The pivoted columns are derived from the `PIVOT` clause and are the row values transformed into columns. The values for these columns are derived from the source column defined in the `PIVOT` clause.

#### PIVOT Syntax


```
SELECT <Anchor column>,
  [Pivoted Column 1] AS <Alias>,
  [Pivoted column 2] AS <Alias>
  ...n
FROM
  (<SELECT Statement of Set to be Pivoted>)
  AS <Set Alias>
PIVOT
(
  <Aggregate Function>(<Aggregated Column>)
FOR
[<Column With the Values for the Pivoted Columns Names>]
  IN ( [Pivoted Column 1], [Pivoted column 2] ...)
) AS <Pivot Table Alias>;
```

#### PIVOT Examples


The following example creates and populates the Orders table.

```
CREATE TABLE Orders
(
  OrderID INT NOT NULL
  IDENTITY(1,1) PRIMARY KEY,
  OrderDate DATE NOT NULL,
  Customer VARCHAR(20) NOT NULL
);
```

```
INSERT INTO Orders (OrderDate, Customer)
VALUES
('20180101', 'John'),
('20180201', 'Mitch'),
('20180102', 'John'),
('20180104', 'Kevin'),
('20180104', 'Larry'),
('20180104', 'Kevin'),
('20180104', 'Kevin');
```

The following example creates a simple PIVOT for the number of orders for each day. Days of month from 5 to 31 are omitted for example simplicity.

```
SELECT 'Number of Orders for Day' AS DayOfMonth,
  [1], [2], [3], [4] /*...[31]*/
FROM (
  SELECT OrderID,
    DAY(OrderDate) AS OrderDay
  FROM Orders
  ) AS SourceSet
PIVOT
(
  COUNT(OrderID)
  FOR OrderDay IN ([1], [2], [3], [4] /*...[31]*/)
) AS PivotSet;
```

For the preceding example, the result looks as shown following.

```
DayOfMonth                1  2  3  4  /*...[31]*/
Number of Orders for Day  2  1  0  4
```

The result set is now oriented in rows against columns. The first column is the description of the columns to follow.

PIVOT for number of orders for each day, for each customer.

```
SELECT Customer,
  [1], [2], [3], [4] /*...[31]*/
FROM (
  SELECT OrderID,
    Customer,
    DAY(OrderDate) AS OrderDay
  FROM Orders
  ) AS SourceSet
PIVOT
(
  COUNT(OrderID)
  FOR OrderDay IN ([1], [2], [3], [4] /*...[31]*/)
) AS PivotSet;
```

```
Customer  1  2  3  4
John      1  1  0  0
Kevin     0  0  0  3
Larry     0  0  0  1
Mitch     1  0  0  0
```

### UNPIVOT


 `UNPIVOT` is similar to `PIVOT` in reverse, but spreads existing column values into rows.

The source set is similar to the result of the `PIVOT` with values pertaining to particular entities listed in columns. Because the result set has more rows than the source, aggregations aren’t required.

It is used less commonly than `PIVOT` because most data in relational databases stores attributes in columns, not the other way around.

#### UNPIVOT Examples


The following example creates and populates the pivot-like `EmployeeSales` table. This is most likely a view or a set from an external source.

```
CREATE TABLE EmployeeSales
(
  SaleDate DATE NOT NULL PRIMARY KEY,
  John INT,
  Kevin INT,
  Mary INT
);
```

```
INSERT INTO EmployeeSales
VALUES
('20180101', 150, 0, 300),
('20180102', 0, 0, 0),
('20180103', 250, 50, 0),
('20180104', 500, 400, 100);
```

The following example unpivots employee sales for each date into individual rows for each employee.

```
SELECT SaleDate,
  Employee,
  SaleAmount
FROM
(
  SELECT SaleDate, John, Kevin, Mary
  FROM EmployeeSales
) AS SourceSet
UNPIVOT (
  SaleAmount
  FOR Employee IN (John, Kevin, Mary)
  ) AS UnpivotSet;
```

For the preceding example, the result looks as shown following.

```
SaleDate    Employee  SaleAmount
2018-01-01  John      150
2018-01-01  Kevin     0
2018-01-01  Mary      300
2018-01-02  John      0
2018-01-02  Kevin     0
2018-01-02  Mary      0
2018-01-03  John      250
2018-01-03  Kevin     50
2018-01-03  Mary      0
2018-01-04  John      500
2018-01-04  Kevin     400
2018-01-04  Mary      100
```

For more information, see [FROM - Using PIVOT and UNPIVOT](https://docs.microsoft.com/en-us/sql/t-sql/queries/from-using-pivot-and-unpivot?view=sql-server-ver15&viewFallbackFrom=sqlserver-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


 Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) doesn’t support the `PIVOT` and `UNPIVOT` relational operators.

You can rewrite the functionality of these operators to use standard SQL syntax, as shown in the following examples.

### PIVOT Examples


The following example creates and populates the Orders table.

```
CREATE TABLE Orders
(
  OrderID SERIAL PRIMARY KEY,
  OrderDate DATE NOT NULL,
  Customer VARCHAR(20) NOT NULL
);
```

```
INSERT INTO Orders (OrderDate, Customer)
VALUES
('20180101', 'John'),
('20180201', 'Mitch'),
('20180102', 'John'),
('20180104', 'Kevin'),
('20180104', 'Larry'),
('20180104', 'Kevin'),
('20180104', 'Kevin');
```

The following example creates a simple PIVOT for the number of orders for each day. Days of month from 5 to 31 are omitted for example simplicity.

```
SELECT 'Number of Orders for Day' AS DayOfMonth,
COUNT(CASE WHEN date_part('day', OrderDate) = 1 THEN 'OrderDate' ELSE NULL END) AS "1",
COUNT(CASE WHEN date_part('day', OrderDate) = 2 THEN 'OrderDate' ELSE NULL END) AS "2",
COUNT(CASE WHEN date_part('day', OrderDate) = 3 THEN 'OrderDate' ELSE NULL END) AS "3",
COUNT(CASE WHEN date_part('day', OrderDate) = 4 THEN 'OrderDate' ELSE NULL END) AS "4" /*...[31]*/
FROM Orders AS O;
```

For the preceding example, the result looks as shown following.

```
DayOfMonth                1  2  3  4  /*...[31]*/
Number of Orders for Day  2  1  0  4
```

PIVOT for number of orders for each day, for each customer.

```
SELECT Customer,
COUNT(CASE WHEN date_part('day', OrderDate) = 1 THEN 'OrderDate' ELSE NULL END) AS "1",
COUNT(CASE WHEN date_part('day', OrderDate) = 2 THEN 'OrderDate' ELSE NULL END) AS "2",
COUNT(CASE WHEN date_part('day', OrderDate) = 3 THEN 'OrderDate' ELSE NULL END) AS "3",
COUNT(CASE WHEN date_part('day', OrderDate) = 4 THEN 'OrderDate' ELSE NULL END) AS "4" /*...[31]*/
FROM Orders AS O
GROUP BY Customer;
```

For the preceding example, the result looks as shown following.

```
Customer  1  2  3  4
John      1  1  0  0
Kevin     0  0  0  3
Larry     0  0  0  1
Mitch     1  0  0  0
```
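
PostgreSQL 9.4 and later also support the aggregate `FILTER` clause, which expresses the same pivot more concisely than the `CASE` expressions shown above.

```
SELECT Customer,
  COUNT(*) FILTER (WHERE date_part('day', OrderDate) = 1) AS "1",
  COUNT(*) FILTER (WHERE date_part('day', OrderDate) = 2) AS "2",
  COUNT(*) FILTER (WHERE date_part('day', OrderDate) = 3) AS "3",
  COUNT(*) FILTER (WHERE date_part('day', OrderDate) = 4) AS "4" /*...[31]*/
FROM Orders
GROUP BY Customer;
```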

### UNPIVOT Examples


The following example creates and populates the pivot-like `EmployeeSales` table. In real life, this is most likely a view or a set from an external source.

```
CREATE TABLE EmployeeSales
(
  SaleDate DATE NOT NULL PRIMARY KEY,
  John INT,
  Kevin INT,
  Mary INT
);
```

```
INSERT INTO EmployeeSales
VALUES
('20180101', 150, 0, 300),
('20180102', 0, 0, 0),
('20180103', 250, 50, 0),
('20180104', 500, 400, 100);
```

The following example unpivots employee sales for each date into individual rows for each employee.

```
SELECT SaleDate, Employee, SaleAmount
FROM (
  SELECT SaleDate,
    Employees.Employee,
    CASE Employees.Employee
      WHEN 'John' THEN John
      WHEN 'Kevin' THEN Kevin
      WHEN 'Mary' THEN Mary
    END AS SaleAmount
  FROM EmployeeSales AS emp
  CROSS JOIN
  (
    SELECT 'John' AS Employee
    UNION ALL
    SELECT 'Kevin'
    UNION ALL
    SELECT 'Mary'
  ) AS Employees
) AS UnpivotedSet;
```

For the preceding example, the result looks as shown following.

```
SaleDate    Employee  SaleAmount
2018-01-01  John      150
2018-01-01  Kevin     0
2018-01-01  Mary      300
2018-01-02  John      0
2018-01-02  Kevin     0
2018-01-02  Mary      0
2018-01-03  John      250
2018-01-03  Kevin     50
2018-01-03  Mary      0
2018-01-04  John      500
2018-01-04  Kevin     400
2018-01-04  Mary      100
```
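
A more idiomatic PostgreSQL rewrite uses `CROSS JOIN LATERAL` with a `VALUES` list, which produces the same result without repeating a `CASE` branch for each column.

```
SELECT s.SaleDate, v.Employee, v.SaleAmount
FROM EmployeeSales AS s
CROSS JOIN LATERAL (
  -- One output row per (employee, amount) pair for each source row.
  VALUES ('John', s.John),
         ('Kevin', s.Kevin),
         ('Mary', s.Mary)
) AS v(Employee, SaleAmount);
```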

# Triggers for T-SQL


This topic provides reference information about migrating triggers from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL. It compares the trigger functionality between the two database systems, highlighting similarities and differences in syntax, scope, and usage. You’ll gain insights into how triggers work in both environments, including their types, execution phases, and management capabilities.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |   [Triggers](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.triggers)   |  Syntax and option differences, similar functionality.  | 

## SQL Server Usage


Triggers are special types of stored procedures that run automatically in response to events. They are most commonly used for Data Manipulation Language (DML).

SQL Server supports `AFTER`, `FOR`, and `INSTEAD OF` triggers, which you can create on tables and views (`AFTER` and `FOR` are synonymous). SQL Server also provides an event trigger framework at the server and database levels that includes Data Definition Language (DDL), Data Control Language (DCL), and general system events such as login.

**Note**  
SQL Server doesn’t support `FOR EACH ROW` triggers in which the trigger code is run once for each row of modified data.

### Trigger Run


 `AFTER` triggers run after the DML statement completes. `INSTEAD OF` triggers run code in place of the original DML statement. You can create `AFTER` triggers on tables only. You can create `INSTEAD OF` triggers on tables and views.

You can create only one `INSTEAD OF` trigger for any given object and event. When multiple `AFTER` triggers exist for the same event and object, you can partially set the trigger order by using the `sp_settriggerorder` system stored procedure. You can use it to set the first and last triggers to be run, but not the order of others.
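
For example, assuming two `AFTER DELETE` triggers named `trg_audit` and `trg_cleanup` (hypothetical names) exist on the same table, the following sets which one fires first.

```
-- @order accepts 'First', 'Last', or 'None'.
EXEC sp_settriggerorder
  @triggername = 'trg_audit',  -- hypothetical trigger name
  @order = 'First',
  @stmttype = 'DELETE';
```

The order of any remaining triggers between the first and last can’t be controlled.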

### Trigger Scope


SQL Server supports statement-level triggers only. The trigger code runs once for each statement. The data modified by the DML statement is available to the trigger scope and is saved in two virtual tables: `INSERTED` and `DELETED`. These tables contain the entire set of changes performed by the DML statement that caused the trigger to run.

SQL Server triggers always run within the transaction of the statement that triggered the run. If the trigger code issues an explicit `ROLLBACK`, or causes an exception that mandates a rollback, the DML statement is also rolled back. For `INSTEAD OF` triggers, the DML statement doesn’t run and doesn’t require a rollback.

### Examples


 **Use a DML trigger to audit invoice deletions** 

The following examples demonstrate how to use a trigger to log rows deleted from a table.

Create and populate the `Invoices` table.

```
CREATE TABLE Invoices
(
  InvoiceID INT NOT NULL PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  TotalAmount DECIMAL(9,2) NOT NULL
);

INSERT INTO Invoices (InvoiceID,Customer,TotalAmount)
VALUES
(1, 'John', 1400.23),
(2, 'Jeff', 245.00),
(3, 'James', 677.22);
```

Create the `InvoiceAuditLog` table.

```
CREATE TABLE InvoiceAuditLog
(
  InvoiceID INT NOT NULL PRIMARY KEY,
  Customer VARCHAR(20) NOT NULL,
  TotalAmount DECIMAL(9,2) NOT NULL,
  DeleteDate DATETIME NOT NULL DEFAULT (GETDATE()),
  DeletedBy VARCHAR(128) NOT NULL DEFAULT (CURRENT_USER)
);
```

Create an `AFTER DELETE` trigger to log deletions from the `Invoices` table to the audit log.

```
CREATE TRIGGER LogInvoiceDeletes
ON Invoices
AFTER DELETE
AS
BEGIN
INSERT INTO InvoiceAuditLog (InvoiceID, Customer, TotalAmount)
SELECT InvoiceID,
  Customer,
  TotalAmount
FROM Deleted
END;
```

Delete an invoice.

```
DELETE FROM Invoices
WHERE InvoiceID = 3;
```

Query the content of both tables.

```
SELECT *
FROM Invoices AS I
  FULL OUTER JOIN
  InvoiceAuditLog AS IAG
  ON I.InvoiceID = IAG.InvoiceID;
```

For the preceding example, the result looks as shown following.

```
InvoiceID  Customer  TotalAmount  InvoiceID  Customer  TotalAmount  DeleteDate      DeletedBy
1          John      1400.23      NULL       NULL      NULL         NULL            NULL
2          Jeff      245.00       NULL       NULL      NULL         NULL            NULL
NULL       NULL      NULL         3          James     677.22       20180224 13:02  Domain/JohnCortney
```

 **Create a DDL trigger** 

Create a trigger to protect all tables in the database from accidental deletion.

```
CREATE TRIGGER PreventTableDrop
ON DATABASE FOR DROP_TABLE
AS
BEGIN
  RAISERROR ('Tables can''t be dropped in this database', 16, 1)
  ROLLBACK TRANSACTION
END;
```

Test the trigger by attempting to drop a table.

```
DROP TABLE [Invoices];
  GO
```

The system displays the following message explaining that the Invoices table can’t be dropped:

```
Msg 50000, Level 16, State 1, Procedure PreventTableDrop, Line 5 [Batch Start Line 56]
Tables can't be dropped in this database.
Msg 3609, Level 16, State 2, Line 57
The transaction ended in the trigger. The batch has been aborted.
```

For more information, see [DML Triggers](https://docs.microsoft.com/en-us/sql/relational-databases/triggers/dml-triggers?view=sql-server-ver15) and [DDL Triggers](https://docs.microsoft.com/en-us/sql/relational-databases/triggers/ddl-triggers?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


Triggers provide much of the same functionality as SQL Server:
+ DML triggers run in response to table-related events, such as `INSERT`, `UPDATE`, and `DELETE`.
+ Event triggers run after certain database events, such as running DDL commands.

Unlike SQL Server triggers, PostgreSQL triggers must call a function. They don’t support anonymous blocks of PL/pgSQL code as part of the trigger body. The user-supplied function is declared with no arguments and has a return type of trigger.

### PostgreSQL DML Triggers


PostgreSQL triggers can be fired `BEFORE` or `AFTER` a DML operation.
+ `BEFORE` triggers run before the operation is attempted on a row, before constraints are checked and before the `INSERT`, `UPDATE`, or `DELETE` is attempted. A `BEFORE` trigger can skip the operation for the current row or change the row being inserted (for `INSERT` and `UPDATE` operations only).
+ `AFTER` triggers run after the operation completes, after constraints are checked and the `INSERT`, `UPDATE`, or `DELETE` command finishes. If the trigger runs after the event, all changes, including the effects of other triggers, are visible to the trigger.

PostgreSQL triggers can run `INSTEAD OF` a DML command when created on views.

PostgreSQL triggers can run `FOR EACH ROW` affected by the DML statement or `FOR EACH STATEMENT` running only once as part of a DML statement.


| When fired | Database event | Row-Level trigger (FOR EACH ROW) | Statement-level trigger (FOR EACH STATEMENT) | 
| --- | --- | --- | --- | 
|  BEFORE  |  INSERT, UPDATE, DELETE  |  Tables and foreign tables  |  Tables, views, and foreign tables  | 
|  BEFORE  |  TRUNCATE  |  —  |  Tables  | 
|  AFTER  |  INSERT, UPDATE, DELETE  |  Tables and foreign tables  |  Tables, views, and foreign tables  | 
|  AFTER  |  TRUNCATE  |  —  |  Tables  | 
|  INSTEAD OF  |  INSERT, UPDATE, DELETE  |  Views  |  —  | 
|  INSTEAD OF  |  TRUNCATE  |  —  |  —  | 

### PostgreSQL Event Triggers


An event trigger runs when a specific event associated with the trigger occurs in the database. Supported events include `ddl_command_start`, `ddl_command_end`, `table_rewrite`, and `sql_drop`.
+  `ddl_command_start` occurs before the run of a `CREATE`, `ALTER`, `DROP`, `SECURITY LABEL`, `COMMENT`, `GRANT`, `REVOKE`, or `SELECT INTO` command.
+  `ddl_command_end` occurs after the command completed and before the transaction commits.
+  `sql_drop` runs only for the `DROP` DDL command, before the `ddl_command_end` trigger runs.

For a full list of supported PostgreSQL event trigger types, see [Event Trigger Firing Matrix](https://www.postgresql.org/docs/10/event-trigger-matrix.html) in the *PostgreSQL documentation*.

### PostgreSQL CREATE TRIGGER Synopsis


```
CREATE [ CONSTRAINT ] TRIGGER name { BEFORE | AFTER | INSTEAD OF } { event [ OR ... ]}
  ON table_name
  [ FROM referenced_table_name ]
  [ NOT DEFERRABLE | [ DEFERRABLE ] [ INITIALLY IMMEDIATE | INITIALLY DEFERRED ] ]
  [ REFERENCING { { OLD | NEW } TABLE [ AS ] transition_relation_name } [ ... ] ]
  [ FOR [ EACH ] { ROW | STATEMENT } ]
  [ WHEN ( condition ) ]
  EXECUTE PROCEDURE function_name ( arguments )

where event can be one of:

  INSERT
  UPDATE [ OF column_name [, ... ] ]
  DELETE
  TRUNCATE
```

**Note**  
 `REFERENCING` is a new option since PostgreSQL 10. You can use it with `AFTER` triggers to interact with the overall view of the `OLD` or `NEW` table of changed rows.
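
For example, a statement-level `AFTER` trigger can expose the full change set through transition tables. This is a sketch; `audit_changes()` is an assumed trigger function, not one defined in this topic.

```
CREATE TRIGGER trg_emp_audit
  AFTER UPDATE ON employees
  REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
  FOR EACH STATEMENT
  EXECUTE PROCEDURE audit_changes();  -- assumed function returning TRIGGER
```

Inside the function, `old_rows` and `new_rows` can be queried like the `DELETED` and `INSERTED` virtual tables in SQL Server.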

### Examples


 **Create a trigger** 

Create a trigger function that stores the run logic (this is the same as a SQL Server DML trigger).

```
CREATE OR REPLACE FUNCTION PROJECTS_SET_NULL()
  RETURNS TRIGGER
  AS $$
  BEGIN
IF TG_OP = 'UPDATE' AND OLD.PROJECTNO != NEW.PROJECTNO OR
  TG_OP = 'DELETE' THEN
UPDATE EMP
  SET PROJECTNO = NULL
  WHERE EMP.PROJECTNO = OLD.PROJECTNO;
  END IF;
  IF TG_OP = 'UPDATE' THEN RETURN NULL;
    ELSIF TG_OP = 'DELETE' THEN RETURN NULL;
  END IF;
END; $$
LANGUAGE PLPGSQL;

CREATE FUNCTION
```

Create the trigger.

```
CREATE TRIGGER TRG_PROJECTS_SET_NULL
AFTER UPDATE OF PROJECTNO OR DELETE
ON PROJECTS
FOR EACH ROW
EXECUTE PROCEDURE PROJECTS_SET_NULL();

CREATE TRIGGER
```

Test the trigger by deleting a row from the `PROJECTS` table.

```
DELETE FROM PROJECTS WHERE PROJECTNO=123;
SELECT PROJECTNO FROM EMP WHERE PROJECTNO=123;

projectno
(0 rows)
```

 **Create an event trigger** 

Create an event trigger function. This is the same as a SQL Server DDL server-level or database-level trigger, such as a trigger that prevents running DDL `DROP` commands.

Note that trigger functions are created with no arguments and must have a return type of `TRIGGER` or `EVENT_TRIGGER`.

```
CREATE OR REPLACE FUNCTION ABORT_DROP_COMMAND()
  RETURNS EVENT_TRIGGER
  AS $$
BEGIN
  RAISE EXCEPTION 'The % Command is Disabled', tg_tag;
END; $$
LANGUAGE PLPGSQL;

CREATE FUNCTION
```

Create the event trigger, which runs before the start of a DDL `DROP` command.

```
CREATE EVENT TRIGGER trg_abort_drop_command
  ON DDL_COMMAND_START
  WHEN TAG IN ('DROP TABLE', 'DROP VIEW', 'DROP FUNCTION',
    'DROP SEQUENCE', 'DROP MATERIALIZED VIEW', 'DROP TYPE')
  EXECUTE PROCEDURE abort_drop_command();
```

Test the trigger by attempting to drop the `EMPLOYEES` table.

```
DROP TABLE EMPLOYEES;

ERROR: The DROP TABLE Command is Disabled
CONTEXT: PL/pgSQL function abort_drop_command() line 3 at RAISE
```

## Summary



| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  DML Triggers Scope  |  Statement level only  |   `FOR EACH ROW` and `FOR EACH STATEMENT`   | 
|  Access to change set  |   `INSERTED` and `DELETED` virtual multi-row tables  |   `OLD` and `NEW` virtual one-row tables or the whole view of changed rows  | 
|  System event triggers  |  DDL, DCL, and other event types  |  Event triggers  | 
|  Trigger run phase  |   `AFTER` and `INSTEAD OF`   |   `AFTER`, `BEFORE`, and `INSTEAD OF`   | 
|  Multi-trigger run order  |  Can only set first and last using `sp_settriggerorder`   |  Triggers on the same event run in alphabetical order of trigger name  | 
|  Drop a trigger  |   `DROP TRIGGER <trigger name>;`   |   `DROP TRIGGER <trigger name>;`   | 
|  Modify trigger code  |  Use the `ALTER TRIGGER` statement  |  Modify function code  | 
|  Enable or disable a trigger  |  Use the `ALTER TRIGGER <trigger name> ENABLE;` and `ALTER TRIGGER <trigger name> DISABLE;`   |   `ALTER TABLE`   | 
|  Triggers on views  |   `INSTEAD OF` triggers only  |   `INSTEAD OF` triggers only  | 
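
For example, the `ALTER TABLE` equivalents of SQL Server’s `ALTER TRIGGER … ENABLE/DISABLE`, using the trigger created earlier in this topic:

```
-- Temporarily stop the trigger from firing, then re-enable it.
ALTER TABLE PROJECTS DISABLE TRIGGER TRG_PROJECTS_SET_NULL;
ALTER TABLE PROJECTS ENABLE TRIGGER TRG_PROJECTS_SET_NULL;
```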

For more information, see [Trigger Functions](https://www.postgresql.org/docs/13/plpgsql-trigger.html) in the *PostgreSQL documentation*.

# Top fetch for T-SQL


This topic provides reference information about feature compatibility between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL, specifically focusing on result set limiting and paging. You can understand how SQL Server’s TOP and FETCH clauses compare to PostgreSQL’s LIMIT and OFFSET functionality. The topic explains the differences in syntax and capabilities, helping you navigate the transition from SQL Server to Aurora PostgreSQL.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |   [TOP and FETCH](chap-sql-server-aurora-pg.tools.actioncode.md#chap-sql-server-aurora-pg.tools.actioncode.fetch)   |  PostgreSQL doesn’t support TOP.  | 

## SQL Server Usage


SQL Server supports two options for limiting and paging result sets returned to the client. `TOP` is a legacy, proprietary T-SQL keyword that is still supported due to its wide usage. The ANSI compliant syntax of `FETCH` and `OFFSET` was introduced in SQL Server 2012 and is recommended for paginating result sets.

### TOP


The `TOP (n)` operator is used in the `SELECT` list and limits the number of rows returned to the client based on the `ORDER BY` clause.

**Note**  
When `TOP` is used with no `ORDER BY` clause, the query is non-deterministic and may return any rows up to the number specified by the `TOP` operator.

You can use `TOP (n)` with two modifier options:
+  `TOP (n) PERCENT` designates a percentage of the rows to return instead of a fixed maximum row count of `n`. When you use `PERCENT`, `n` can be any value from 1 through 100.
+  `TOP (n) WITH TIES` allows the query to return more than the `n` rows or percentage specified when additional rows share the same ordering values as the last row returned.

If you use `TOP (n)` without `WITH TIES` and there are additional rows that have the same ordering value as the last row in the group of n rows, the query is also non-deterministic because the last row may be any of the rows that share the same ordering value.

### Syntax


```
SELECT TOP (expression) [ PERCENT ] [ WITH TIES ] <Select List>
FROM <Table Source>
ORDER BY <Ordering Expression> [ ASC | DESC ] [ ,...n ];

ORDER BY <Ordering Expression> [ ASC | DESC ] [ ,...n ]
OFFSET <Offset Expression> { ROW | ROWS }
[FETCH { FIRST | NEXT } <Page Size Expression> { ROW | ROWS } ONLY ]
```

### Examples


The following example creates the OrderItems table.

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200),
(3, 'M6 Locking Nut', 300);
```

The following example retrieves the 3 most ordered items by quantity.

```
-- Using TOP
SELECT TOP (3) *
FROM OrderItems
ORDER BY Quantity DESC;

-- USING FETCH
SELECT *
FROM OrderItems
ORDER BY Quantity DESC
OFFSET 0 ROWS FETCH NEXT 3 ROWS ONLY;
```

For the preceding example, the result looks as shown following.

```
OrderID  Item            Quantity
3        M6 Locking Nut  300
3        M8 Washer       200
2        M8 Nut          100
```

The following example includes rows with ties.

```
SELECT TOP (3) WITH TIES *
FROM OrderItems
ORDER BY Quantity DESC;
```

For the preceding example, the result looks as shown following.

```
OrderID  Item            Quantity
3        M6 Locking Nut  300
3        M8 Washer       200
2        M8 Nut          100
1        M8 Bolt         100
```

The following example retrieves half the rows based on quantity.

```
SELECT TOP (50) PERCENT *
FROM OrderItems
ORDER BY Quantity DESC;
```

For the preceding example, the result looks as shown following.

```
OrderID  Item            Quantity
3        M6 Locking Nut  300
3        M8 Washer       200
```

For more information, see [SELECT - ORDER BY Clause (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/queries/select-order-by-clause-transact-sql?view=sql-server-ver15) and [TOP (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/queries/top-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


Amazon Aurora PostgreSQL-Compatible Edition (Aurora PostgreSQL) supports the `LIMIT… OFFSET` operator for paging result sets. This syntax isn't ANSI compliant, but it is popular with other database engines.

The `LIMIT` clause limits the number of rows returned and doesn't require an `ORDER BY` clause, although omitting `ORDER BY` makes the query non-deterministic.

The `OFFSET` clause is zero-based, similar to SQL Server and used for pagination. `OFFSET 0` is the same as omitting the `OFFSET` clause, as is `OFFSET` with a NULL argument.

### Syntax


```
SELECT select_list
  FROM table_expression
  [ ORDER BY ... ]
  [ LIMIT { number | ALL } ] [ OFFSET number ]
```
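For pagination, the `OFFSET` is typically computed from the page number and page size. The following sketch, assuming the `OrderItems` table created in the examples following and a hypothetical page size of 2 rows, retrieves the second page.

```
-- Page size 2, page 2 (pages are 1-based):
-- OFFSET = (page_number - 1) * page_size = 2
SELECT *
FROM OrderItems
ORDER BY Quantity DESC
LIMIT 2 OFFSET 2;
```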

### Migration Considerations


You can use the `LIMIT…​ OFFSET` syntax to replace the functionality of `TOP(n)` and `FETCH…​ OFFSET` in SQL Server. It is automatically converted by the AWS Schema Conversion Tool (AWS SCT) except for the `WITH TIES` and `PERCENT` modifiers.

To replace the `PERCENT` option, first calculate how many rows the query returns and then calculate the fixed number of rows to be returned based on that number.

**Note**  
Because this technique involves added complexity and accessing the table twice, consider changing the logic to use a fixed number of rows instead of a percentage.

To replace the `WITH TIES` option, rewrite the logic to add another query that checks for the existence of additional rows that have the same ordering value as the last row returned from the `LIMIT` clause.

**Note**  
Because this technique introduces significant added complexity and three accesses to the source table, consider changing the logic to introduce a tie-breaker into the `ORDER BY` clause.

### Examples


The following example creates the OrderItems table.

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

```
INSERT INTO OrderItems (OrderID, Item, Quantity)
VALUES
(1, 'M8 Bolt', 100),
(2, 'M8 Nut', 100),
(3, 'M8 Washer', 200),
(3, 'M6 Locking Nut', 300);
```

The following example retrieves the three most ordered items by quantity.

```
SELECT *
FROM OrderItems
ORDER BY Quantity DESC
LIMIT 3 OFFSET 0;
```

For the preceding example, the result looks as shown following.

```
OrderID  Item            Quantity
3        M6 Locking Nut  300
3        M8 Washer       200
1        M8 Bolt         100
```

The following example includes rows with ties.

```
SELECT *
FROM
(
  SELECT *
  FROM OrderItems
  ORDER BY Quantity DESC
  LIMIT 3 OFFSET 0
) AS X
UNION
SELECT *
FROM OrderItems
WHERE Quantity = (
  SELECT Quantity
  FROM OrderItems
  ORDER BY Quantity DESC
  LIMIT 1 OFFSET 2
)
ORDER BY Quantity DESC;
```

For the preceding example, the result looks as shown following.

```
OrderID  Item            Quantity
3        M6 Locking Nut  300
3        M8 Washer       200
2        M8 Nut          100
1        M8 Bolt         100
```

The following example retrieves half the rows based on quantity.

```
CREATE or replace FUNCTION getOrdersPct(int) RETURNS SETOF OrderItems AS $$
SELECT * FROM OrderItems
ORDER BY Quantity desc LIMIT (SELECT COUNT(*)*$1/100 FROM OrderItems) OFFSET 0;
$$ LANGUAGE SQL;
```

```
SELECT * from getOrdersPct(50);
or
SELECT getOrdersPct(50);

OrderID  Item            Quantity
3        M6 Locking Nut  300
3        M8 Washer       200
```

## Summary



| SQL Server |  Aurora PostgreSQL  | Comments | 
| --- | --- | --- | 
|   `TOP (n)`   |   `LIMIT n`   |  | 
|   `TOP (n) WITH TIES`   |  Not supported  |  See examples for workaround  | 
|   `TOP (n) PERCENT`   |  Not supported  |  See examples for workaround  | 
|   `OFFSET…​ FETCH`   |   `LIMIT…​ OFFSET`   |  | 

For more information, see [LIMIT and OFFSET](https://www.postgresql.org/docs/13/queries-limit.html) in the *PostgreSQL documentation*.

# User-defined functions for T-SQL


This topic provides reference information about User-Defined Functions (UDFs) in SQL Server and their compatibility with PostgreSQL. It introduces the types of UDFs supported in SQL Server, including scalar functions, table-valued functions, and multi-statement table-valued functions. The topic explains the characteristics of UDFs, such as their inability to modify database structures or data outside their scope, and the distinction between deterministic and non-deterministic functions.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |  N/A  |  Syntax and option differences.  | 

## SQL Server Usage


User-Defined Functions (UDF) are code objects that accept input parameters and return either a scalar value or a set consisting of rows and columns. You can use T-SQL or Common Language Runtime (CLR) code to implement SQL Server UDFs.

**Note**  
This section doesn’t cover CLR code objects.

Function invocations can’t have any lasting impact on the database. They must be contained and can only modify objects and data local to their scope (for example, data in local variables). Functions aren’t allowed to modify data or the structure of a database.

Functions may be deterministic or non-deterministic. Deterministic functions always return the same result when you run them with the same data. Non-deterministic functions may return different results each time they run, for example, a function that returns the current date or time.

SQL Server supports three types of T-SQL UDFs: Scalar Functions, Table-Valued Functions, and Multi-Statement Table-Valued Functions.

SQL Server 2019 adds scalar user-defined functions (UDF) inlining. Inlining transforms functions into relational expressions and embeds them in the calling SQL query. This transformation improves the performance of workloads that take advantage of scalar UDFs. Scalar UDF inlining facilitates cost-based optimization of operations inside UDFs. The results are efficient, set-oriented, and parallel instead of inefficient, iterative, serial run plans. For more information, see [Scalar UDF Inlining](https://docs.microsoft.com/en-us/sql/relational-databases/user-defined-functions/scalar-udf-inlining?view=sql-server-ver15) in the *SQL Server documentation*.

### Scalar User-Defined Functions


Scalar UDFs accept zero or more parameters and return a scalar value. You can use scalar UDFs in T-SQL expressions.

#### Syntax


```
CREATE FUNCTION <Function Name> ([{<Parameter Name> [AS] <Data Type> [= <Default
Value>] [READONLY]} [,...n]])
RETURNS <Return Data Type>
[AS]
BEGIN
<Function Body Code>
RETURN <Scalar Expression>
END[;]
```

#### Examples


The following example creates a scalar function to change the first character of a string to upper case.

```
CREATE FUNCTION dbo.UpperCaseFirstChar (@String VARCHAR(20))
RETURNS VARCHAR(20)
AS
BEGIN
RETURN UPPER(LEFT(@String, 1)) + LOWER(SUBSTRING(@String, 2, 19))
END;
```

```
SELECT dbo.UpperCaseFirstChar ('mIxEdCasE');

Mixedcase
```

### User-Defined Table-Valued Functions


Inline table-valued UDFs are similar to views or Common Table Expressions (CTE) with the added benefit of parameters. You can use inline table-valued UDFs in `FROM` clauses as subqueries. Also, you can join inline table-valued UDFs to other source table rows using the `APPLY` and `OUTER APPLY` operators. Inline table-valued UDFs have many associated internal optimizer optimizations due to their simple, view-like characteristics.

#### Syntax


```
CREATE FUNCTION <Function Name> ([{<Parameter Name> [AS] <Data Type> [= <Default
Value>] [READONLY]} [,...n]])
RETURNS TABLE
[AS]
RETURN (<SELECT Query>)[;]
```

#### Examples


The following example creates a table-valued function to aggregate employee orders.

```
CREATE TABLE Orders
(
  OrderID INT NOT NULL PRIMARY KEY,
  EmployeeID INT NOT NULL,
  OrderDate DATETIME NOT NULL
);
```

```
INSERT INTO Orders (OrderID, EmployeeID, OrderDate)
VALUES
(1, 1, '20180101 13:00:05'),
(2, 1, '20180201 11:33:12'),
(3, 2, '20180112 10:22:35');
```

```
CREATE FUNCTION dbo.EmployeeMonthlyOrders
(@EmployeeID INT)
RETURNS TABLE AS
RETURN
(
  SELECT EmployeeID,
    YEAR(OrderDate) AS OrderYear,
    MONTH(OrderDate) AS OrderMonth,
    COUNT(*) AS NumOrders
  FROM Orders AS O
  WHERE EmployeeID = @EmployeeID
  GROUP BY EmployeeID,
    YEAR(OrderDate),
    MONTH(OrderDate)
);
```

```
SELECT *
FROM dbo.EmployeeMonthlyOrders (1)

EmployeeID  OrderYear  OrderMonth  NumOrders
1           2018       1           1
1           2018       2           1
```

### Multi-Statement User-Defined Table-Valued Functions


Multi-statement table-valued UDFs, like inline UDFs, are also similar to views or CTEs with the added benefit of parameters. You can use multi-statement table-valued UDFs in `FROM` clauses as subqueries. Also, you can join multi-statement table-valued UDFs to other source table rows using the `APPLY` and `OUTER APPLY` operators.

The difference between multi-statement UDFs and the inline UDFs is that multi-statement UDFs aren’t restricted to a single `SELECT` statement. They can consist of multiple statements including logic implemented with flow control, complex data processing, security checks, and so on.

The downside of using multi-statement UDFs is that far fewer optimizations are possible and performance may suffer.

#### Syntax


```
CREATE FUNCTION <Function Name> ([{<Parameter Name> [AS] <Data Type> [= <Default
Value>] [READONLY]} [,...n]])
RETURNS <@Return Variable> TABLE <Table Definition>
[AS]
BEGIN
<Function Body Code>
RETURN
END[;]
```
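The following sketch shows a hypothetical multi-statement table-valued function based on the `Orders` table from the earlier example. It uses flow control, which isn't possible in an inline table-valued UDF.

```
CREATE FUNCTION dbo.EmployeeOrderCount (@EmployeeID INT)
RETURNS @Result TABLE (EmployeeID INT, NumOrders INT)
AS
BEGIN
  INSERT INTO @Result (EmployeeID, NumOrders)
  SELECT EmployeeID, COUNT(*)
  FROM Orders
  WHERE EmployeeID = @EmployeeID
  GROUP BY EmployeeID;

  -- Return a zero-count row when the employee has no orders.
  IF @@ROWCOUNT = 0
    INSERT INTO @Result (EmployeeID, NumOrders)
    VALUES (@EmployeeID, 0);

  RETURN;
END;
```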

For more information, see [CREATE FUNCTION (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-function-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


For more information, see [Stored Procedures](chap-sql-server-aurora-pg.tsql.storedprocedures.md).

### Syntax


```
CREATE [ OR REPLACE ] FUNCTION
name ( [ [ argmode ] [ argname ] argtype [ { DEFAULT | = } default_expr ] [, ...]] )
[ RETURNS rettype
| RETURNS TABLE ( column_name column_type [, ...] ) ]
{ LANGUAGE lang_name
| TRANSFORM { FOR TYPE type_name } [, ... ]
| WINDOW
| IMMUTABLE | STABLE | VOLATILE | [ NOT ] LEAKPROOF
| CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
| [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
| PARALLEL { UNSAFE | RESTRICTED | SAFE }
| COST execution_cost
| ROWS result_rows
| SET configuration_parameter { TO value | = value | FROM CURRENT }
| AS 'definition'
| AS 'obj_file', 'link_symbol'
} ...
[ WITH ( attribute [, ...] ) ]
```
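As a sketch, the SQL Server scalar and inline table-valued UDF examples shown earlier might be rewritten for PostgreSQL as follows. The parameter names are illustrative.

```
-- Scalar function: change the first character of a string to upper case.
CREATE OR REPLACE FUNCTION UpperCaseFirstChar (p_string VARCHAR(20))
RETURNS VARCHAR(20) AS $$
  SELECT UPPER(LEFT(p_string, 1)) || LOWER(SUBSTRING(p_string, 2, 19));
$$ LANGUAGE SQL IMMUTABLE;

-- Table-valued function: aggregate employee orders.
CREATE OR REPLACE FUNCTION EmployeeMonthlyOrders (p_employee_id INT)
RETURNS TABLE (EmployeeID INT, OrderYear INT, OrderMonth INT, NumOrders BIGINT)
AS $$
  SELECT O.EmployeeID,
    EXTRACT(YEAR FROM O.OrderDate)::INT,
    EXTRACT(MONTH FROM O.OrderDate)::INT,
    COUNT(*)
  FROM Orders AS O
  WHERE O.EmployeeID = p_employee_id
  GROUP BY O.EmployeeID,
    EXTRACT(YEAR FROM O.OrderDate),
    EXTRACT(MONTH FROM O.OrderDate);
$$ LANGUAGE SQL;
```

Call the table-valued function in a `FROM` clause, for example `SELECT * FROM EmployeeMonthlyOrders(1);`.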

# User-defined types for T-SQL


This topic provides reference information about user-defined types in SQL Server and PostgreSQL, which is valuable for database administrators and developers migrating from Microsoft SQL Server 2019 to Amazon Aurora PostgreSQL. You can gain insight into how both database systems implement custom data types, including their similarities and differences.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Four star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-4.png)   |   ![\[Four star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-4.png)   |  N/A  |  Syntax and option differences.  | 

## SQL Server Usage


SQL Server user-defined types provide a mechanism for encapsulating custom data types and for adding NULL constraints.

SQL Server also supports table-valued user-defined types, which you can use to pass a set of values to a stored procedure.

User-defined types can also be associated with CLR code assemblies. Beginning with SQL Server 2014, memory-optimized table types are supported for use with memory-optimized tables and natively compiled code.

**Note**  
If your code uses custom rules bound to data types, Microsoft recommends discontinuing the use of this deprecated feature.

All user-defined types are based on an existing system data type. They allow developers to reuse the definition, making the code and schema more readable.

### Syntax


The simplified syntax for the `CREATE TYPE` statement is shown following.

```
CREATE TYPE <type name> {
FROM <base type> [ NULL | NOT NULL ] | AS TABLE (<Table Definition>)}
```

### User-Defined Types Examples


The following example creates a `ZipCode` scalar user-defined type.

```
CREATE TYPE ZipCode
FROM CHAR(5)
NOT NULL
```

The following example uses this `ZipCode` type in a table.

```
CREATE TABLE UserLocations
(UserID INT NOT NULL PRIMARY KEY, ZipCode ZipCode);

INSERT INTO [UserLocations] ([UserID],[ZipCode]) VALUES (1, '94324');
INSERT INTO [UserLocations] ([UserID],[ZipCode]) VALUES (2, NULL);
```

The code in the preceding example displays the following error message indicating that NULL values for `ZipCode` aren’t allowed.

```
Msg 515, Level 16, State 2, Line 78
Can't insert the value NULL into column 'ZipCode', table 'tempdb.dbo.UserLocations';
column does not allow nulls. INSERT fails.
The statement has been terminated.
```

### Table-Valued Types Examples


The following example demonstrates how to create and use a table-valued type to pass a set of values to a stored procedure.

Create the `OrderItems` table.

```
CREATE TABLE OrderItems
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

Create a table-valued type for the `OrderItems` table.

```
CREATE TYPE OrderItems
AS TABLE
(
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY(OrderID, Item)
);
```

Create the InsertOrderItems procedure. Note that the entire set of rows from the table-valued parameter is handled with one statement.

```
CREATE PROCEDURE InsertOrderItems
@OrderItems AS OrderItems READONLY
AS
BEGIN
  INSERT INTO OrderItems(OrderID, Item, Quantity)
  SELECT OrderID,
    Item,
    Quantity
  FROM @OrderItems;
END
```

Instantiate the OrderItems type, insert the values, and pass it to a stored procedure.

```
DECLARE @OrderItems AS OrderItems;

INSERT INTO @OrderItems ([OrderID], [Item], [Quantity])
VALUES
(1, 'M8 Bolt', 100),
(1, 'M8 Nut', 100),
(1, 'M8 Washer', 200);

EXECUTE [InsertOrderItems] @OrderItems = @OrderItems;

(3 rows affected)
```

Select all rows from the OrderItems table.

```
SELECT * FROM OrderItems;

OrderID  Item       Quantity
1        M8 Bolt    100
1        M8 Nut     100
1        M8 Washer  200
```

For more information, see [CREATE TYPE (Transact-SQL)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-type-transact-sql?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


Similar to SQL Server, PostgreSQL enables the creation of user-defined types using the `CREATE TYPE` statement.

A user-defined type is owned by the user who creates it. If a schema name is specified, the type is created under that schema.

PostgreSQL supports the creation of several different user-defined types:
+ Composite types store a single named attribute attached to a data type or multiple attributes as an attribute collection. In PostgreSQL, you can also use the `CREATE TYPE` statement standalone with an association to a table.
+ Enumerated types (enum) store a static ordered set of values. For example, product categories.

  ```
  CREATE TYPE PRODUCT_CATEGORY AS ENUM
    ('Hardware', 'Software', 'Document');
  ```
+ Range Types store a range of values, for example, a range of timestamps used to represent the ranges of time of when a course is scheduled.

  ```
  CREATE TYPE float8_range AS RANGE
    (subtype = float8, subtype_diff = float8mi);
  ```

  For more information, see [Range Types](https://www.postgresql.org/docs/13/rangetypes.html) in the *PostgreSQL documentation*.
+ Base types are the system core types (abstract types) and are implemented in a low-level language such as C.
+ Array types support definition of columns as multidimensional arrays. You can create an array column with a built-in type or a user-defined base type, enum type, or composite.

  ```
  CREATE TABLE COURSE_SCHEDULE (
    COURSE_ID NUMERIC PRIMARY KEY,
    COURSE_NAME VARCHAR(60),
    COURSE_SCHEDULES text[]);
  ```

  For more information, see [Arrays](https://www.postgresql.org/docs/13/arrays.html) in the *PostgreSQL documentation*.

### Syntax


```
CREATE TYPE name AS RANGE (
  SUBTYPE = subtype
  [ , SUBTYPE_OPCLASS = subtype_operator_class ]
  [ , COLLATION = collation ]
  [ , CANONICAL = canonical_function ]
  [ , SUBTYPE_DIFF = subtype_diff_function ]
)
CREATE TYPE name (
  INPUT = input_function,
  OUTPUT = output_function
  [ , RECEIVE = receive_function ]
  [ , SEND = send_function ]
  [ , TYPMOD_IN = type_modifier_input_function ]
  [ , TYPMOD_OUT = type_modifier_output_function ]
  [ , ANALYZE = analyze_function ]
  [ , INTERNALLENGTH = { internallength | VARIABLE } ]
  [ , PASSEDBYVALUE ]
  [ , ALIGNMENT = alignment ]
  [ , STORAGE = storage ]
  [ , LIKE = like_type ]
  [ , CATEGORY = category ]
  [ , PREFERRED = preferred ]
  [ , DEFAULT = default ]
  [ , ELEMENT = element ]
  [ , DELIMITER = delimiter ]
  [ , COLLATABLE = collatable ]
)
```

### Examples


The following example creates a user-defined type for storing an employee phone number.

```
CREATE TYPE EMP_PHONE_NUM AS (
  PHONE_NUM VARCHAR(11));

CREATE TABLE EMPLOYEES (
  EMP_ID NUMERIC PRIMARY KEY,
  EMP_PHONE EMP_PHONE_NUM NOT NULL);

INSERT INTO EMPLOYEES VALUES(1, ROW('111-222-333'));

SELECT a.EMP_ID, (a.EMP_PHONE).PHONE_NUM FROM EMPLOYEES a;

emp_id  phone_num
1       111-222-333
(1 row)
```
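A composite type attribute can't carry a `NOT NULL` constraint of its own. For a scalar type with a NULL constraint, such as the SQL Server `ZipCode` example shown earlier, the closer PostgreSQL equivalent is a domain. The following is a sketch; the names are illustrative.

```
CREATE DOMAIN ZIP_CODE AS CHAR(5) NOT NULL;

CREATE TABLE USER_LOCATIONS
  (USER_ID INT NOT NULL PRIMARY KEY, ZIP ZIP_CODE);

INSERT INTO USER_LOCATIONS (USER_ID, ZIP) VALUES (1, '94324');
-- Inserting a NULL ZIP value fails with a domain constraint error.
```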

The following example creates a composite type as a collection of attributes for the employees table.

```
CREATE TYPE EMP_ADDRESS AS (
  STATE VARCHAR(2),
  CITY VARCHAR(20),
  STREET VARCHAR(20),
  ZIP_CODE NUMERIC);

CREATE TABLE EMPLOYEES (
  EMP_ID NUMERIC PRIMARY KEY,
  EMP_NAME VARCHAR(10) NOT NULL,
  EMP_ADDRESS EMP_ADDRESS NOT NULL);

INSERT INTO EMPLOYEES
  VALUES(1, 'John Smith',
  ('AL', 'Gulf Shores', '3033 Joyce Street', '36542'));

SELECT a.EMP_NAME,
  (a.EMP_ADDRESS).STATE,
  (a.EMP_ADDRESS).CITY,
  (a.EMP_ADDRESS).STREET,
  (a.EMP_ADDRESS).ZIP_CODE
FROM EMPLOYEES a;

emp_name    state  city         street             zip_code
John Smith  AL     Gulf Shores  3033 Joyce Street  36542
```
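PostgreSQL doesn't support table-valued parameters. A common workaround is to pass the rows as an array of a composite type and unnest it inside the function. The following sketch mirrors the SQL Server `OrderItems` example; the names are illustrative.

```
CREATE TYPE ORDER_ITEM AS (
  OrderID INT,
  Item VARCHAR(20),
  Quantity SMALLINT);

CREATE TABLE OrderItems (
  OrderID INT NOT NULL,
  Item VARCHAR(20) NOT NULL,
  Quantity SMALLINT NOT NULL,
  PRIMARY KEY (OrderID, Item));

CREATE OR REPLACE FUNCTION InsertOrderItems (p_items ORDER_ITEM[])
RETURNS VOID AS $$
  INSERT INTO OrderItems (OrderID, Item, Quantity)
  SELECT i.OrderID, i.Item, i.Quantity
  FROM UNNEST(p_items) AS i;
$$ LANGUAGE SQL;

SELECT InsertOrderItems(ARRAY[
  ROW(1, 'M8 Bolt', 100)::ORDER_ITEM,
  ROW(1, 'M8 Nut', 100)::ORDER_ITEM,
  ROW(1, 'M8 Washer', 200)::ORDER_ITEM]);
```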

For more information, see [CREATE TYPE](https://www.postgresql.org/docs/13/sql-createtype.html) and [Composite Types](https://www.postgresql.org/docs/13/rowtypes.html) in the *PostgreSQL documentation*.

# Identity and sequences for T-SQL


This topic provides reference information comparing automatic enumeration features between Microsoft SQL Server 2019 and Amazon Aurora PostgreSQL. It focuses on how these databases handle sequence generation and identity columns, which are commonly used for creating surrogate keys in relational database systems.


| Feature compatibility |  AWS SCT / AWS DMS automation level |  AWS SCT action code index | Key differences | 
| --- | --- | --- | --- | 
|   ![\[Three star feature compatibility\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-compatibility-3.png)   |   ![\[Three star automation level\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/images/pb-automation-3.png)   |  N/A  |  Fewer options with `SERIAL`. Reseeding needs to be rewritten.  | 

## SQL Server Usage


Automatic enumeration functions and columns are common with relational database management systems and are often used for generating surrogate keys.

SQL Server provides several features that support automatic generation of monotonically increasing values.
+  `IDENTITY` property of a table column.
+  `SEQUENCE` objects framework.
+ Numeric functions such as `IDENTITY` and `NEWSEQUENTIALID`.

### Identity


The `IDENTITY` property is probably the most widely used means of generating surrogate primary keys in SQL Server applications. Each table may have a single numeric column assigned as an `IDENTITY`, using the `CREATE TABLE` or `ALTER TABLE` DDL statements. You can explicitly specify a starting value and increment.

**Note**  
The identity property doesn’t enforce uniqueness of column values, indexing, or any other property. Additional constraints such as primary or unique keys, explicit index specifications, or other properties must be specified in addition to the `IDENTITY` property.

The `IDENTITY` value is generated as part of the transaction that inserts table rows. Applications can obtain `IDENTITY` values using the `@@IDENTITY`, `SCOPE_IDENTITY`, and `IDENT_CURRENT` functions.

You can manage `IDENTITY` columns using the `DBCC CHECKIDENT` command, which provides functionality for reseeding and altering properties.

#### Syntax


```
IDENTITY [(<Seed Value>, <Increment Value>)]
```

#### Examples


The following example creates a table with an `IDENTITY` column.

```
CREATE TABLE MyTABLE
(
  Col1 INT NOT NULL
  PRIMARY KEY NONCLUSTERED IDENTITY(1,1),
  Col2 VARCHAR(20) NOT NULL
);
```

The following example inserts a row and retrieves the generated `IDENTITY` value.

```
DECLARE @LastIdent INT;
INSERT INTO MyTable(Col2)
VALUES('SomeString');
SET @LastIdent = SCOPE_IDENTITY()
```

The following example creates a table with a non-key `IDENTITY` column and an increment of 10.

```
CREATE TABLE MyTABLE
(
  Col1 VARCHAR(20) NOT NULL
  PRIMARY KEY,
  Col2 INT NOT NULL
  IDENTITY(1,10)
);
```

The following example creates a table with a compound primary key including an `IDENTITY` column.

```
CREATE TABLE MyTABLE
(
  Col1 VARCHAR(20) NOT NULL,
  Col2 INT NOT NULL
  IDENTITY(1,10),
  PRIMARY KEY (Col1, Col2)
);
```

### SEQUENCE


Sequences are objects that are independent of a particular table or column and are defined using the `CREATE SEQUENCE` DDL statement. You can manage sequences using the `ALTER SEQUENCE` statement. Multiple tables and multiple columns from the same table may use the values from one or more `SEQUENCE` objects.

You can retrieve a value from a `SEQUENCE` object using the `NEXT VALUE FOR` function. For example, a `SEQUENCE` value can be used as a default value for a surrogate key column.

 `SEQUENCE` objects provide several advantages over `IDENTITY` columns:
+ You can use `SEQUENCE` objects to obtain a value before the actual `INSERT` takes place.
+ You can share value series among columns and tables.
+ Easier management, restart, and modification of sequence properties.
+ Allows assignment of value ranges using `sp_sequence_get_range` and not just per-row values.

#### Syntax


```
CREATE SEQUENCE <Sequence Name> [AS <Integer Data Type> ]
START WITH <Seed Value>
INCREMENT BY <Increment Value>;
```

```
ALTER SEQUENCE <Sequence Name>
RESTART [WITH <Reseed Value>]
INCREMENT BY <New Increment Value>;
```

#### Examples


The following example creates a sequence and uses it for a primary key default.

```
CREATE SEQUENCE MySequence AS INT START WITH 1 INCREMENT BY 1;
CREATE TABLE MyTable
(
  Col1 INT NOT NULL
  PRIMARY KEY NONCLUSTERED DEFAULT (NEXT VALUE FOR MySequence),
  Col2 VARCHAR(20) NULL
);
```

```
INSERT MyTable (Col1, Col2) VALUES (DEFAULT, 'cde'), (DEFAULT, 'xyz');
```

```
SELECT * FROM MyTable;
```

```
Col1  Col2
1     cde
2     xyz
```

### Identity


SQL Server provides two sequential generation functions: `IDENTITY` and `NEWSEQUENTIALID`.

**Note**  
The IDENTITY function should not be confused with the IDENTITY property of a column.

You can use the IDENTITY function only in a `SELECT …​ INTO` statement to insert `IDENTITY` column values into a new table.

The `NEWSEQUENTIALID` function generates a GUID. While the `NEWID` function generates a random GUID, the `NEWSEQUENTIALID` function guarantees that every GUID created is greater (in numeric value) than any other GUID previously generated by the same function on the same server since the operating system restart.

You can use `NEWSEQUENTIALID` only with `DEFAULT` constraints associated with columns having a `UNIQUEIDENTIFIER` data type.

#### Syntax


```
IDENTITY (<Data Type> [, <Seed Value>, <Increment Value>]) [AS <Alias>]
```

```
NEWSEQUENTIALID()
```

#### Examples


The following example uses the `IDENTITY` function as surrogate key for a new table based on an existing table.

```
CREATE TABLE MySourceTable
(
  Col1 INT NOT NULL PRIMARY KEY,
  Col2 VARCHAR(10) NOT NULL,
  Col3 VARCHAR(10) NOT NULL
);
```

```
INSERT INTO MySourceTable
VALUES
(12, 'String12', 'String12'),
(25, 'String25', 'String25'),
(95, 'String95', 'String95');
```

```
SELECT IDENTITY(INT, 100, 1) AS SurrogateKey,
  Col1,
  Col2,
  Col3
INTO MyNewTable
FROM MySourceTable
ORDER BY Col1 DESC;
```

```
SELECT *
FROM MyNewTable;
```

```
SurrogateKey  Col1  Col2      Col3
100           95    String95  String95
101           25    String25  String25
102           12    String12  String12
```

The following example uses `NEWSEQUENTIALID` as a surrogate key for a new table.

```
CREATE TABLE MyTable
(
  Col1 UNIQUEIDENTIFIER NOT NULL
  PRIMARY KEY NONCLUSTERED DEFAULT NEWSEQUENTIALID()
);
```

```
INSERT INTO MyTable
DEFAULT VALUES;
```

```
SELECT *
FROM MyTable;
```

```
Col1

9CC01320-C5AA-E811-8440-305B3A017068
```

For more information, see [Sequence Numbers](https://docs.microsoft.com/en-us/sql/relational-databases/sequence-numbers/sequence-numbers?view=sql-server-ver15) and [CREATE TABLE (Transact-SQL) IDENTITY (Property)](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql-identity-property?view=sql-server-ver15) in the *SQL Server documentation*.

## PostgreSQL Usage


The PostgreSQL `CREATE SEQUENCE` command is mostly compatible with the SQL Server `CREATE SEQUENCE` command. Sequences in PostgreSQL serve the same purpose as in SQL Server; they generate numeric identifiers automatically. A sequence object is owned by the user that created it.

### Sequence Parameters

+  `TEMPORARY` or `TEMP` — PostgreSQL can create a temporary sequence within a session. Once the session ends, the sequence is automatically dropped.
+  `IF NOT EXISTS` — Creates a sequence only if a sequence with the same name doesn't already exist. If it does exist, a notice is issued instead of an error, and the existing sequence is left unchanged.
+  `INCREMENT BY` — An optional parameter with a default value of 1. Positive values generate sequence values in ascending order. Negative values generate sequence values in descending sequence.
+  `START WITH` — An optional parameter. The default starting value is `MINVALUE` for ascending sequences and `MAXVALUE` for descending sequences.
+  `MAXVALUE` | `NO MAXVALUE` — The default is 2^63 - 1 for ascending sequences and -1 for descending sequences.
+  `MINVALUE` | `NO MINVALUE` — The default is 1 for ascending sequences and -2^63 for descending sequences.
+  `CYCLE` | `NO CYCLE` — If the sequence value reaches `MAXVALUE` or `MINVALUE`, the `CYCLE` parameter instructs the sequence to return to the initial value (`MINVALUE` or `MAXVALUE`). The default is `NO CYCLE`.
+  `CACHE` — In PostgreSQL, `NOCACHE` isn't supported. By default, when the `CACHE` parameter isn't specified, no sequence values are pre-cached into memory (equivalent to the SQL Server `NOCACHE` parameter). The minimum value is 1.
+  `OWNED BY` | `OWNED BY NONE` — Specifies that the sequence object is associated with a specific column in a table. The sequence is automatically dropped when the associated column or table is dropped.
+  `AS data_type` — This option is available in PostgreSQL version 10 and higher. To easily determine the minimum and maximum values and also improve storage management, you can select the data type for the sequence. The available data types are smallint, integer, and bigint. The default data type is bigint.
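For example, you might constrain a sequence's range by choosing a smaller data type. The following is a sketch; the sequence name is illustrative.

```
CREATE SEQUENCE SMALL_SEQ AS SMALLINT;
-- MAXVALUE defaults to 32767, the maximum of the smallint type.
```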

### Syntax


```
CREATE [ TEMPORARY | TEMP ] SEQUENCE [ IF NOT EXISTS ] name
[ INCREMENT [ BY ] increment ]
[ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
[ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
[ OWNED BY { table_name.column_name | NONE } ]
```

Most SQL Server `CREATE SEQUENCE` parameters are compatible with PostgreSQL.

### Examples


The following example creates a sequence.

```
CREATE SEQUENCE SEQ_1 START WITH 100
  INCREMENT BY 1 MAXVALUE 99999999999 CACHE 20 NO CYCLE;
```

The following example drops a sequence.

```
DROP SEQUENCE SEQ_1;
```

View the sequences in the current database along with their specifications.

```
SELECT * FROM INFORMATION_SCHEMA.SEQUENCES;
OR
\ds
```

The following example uses a PostgreSQL sequence as part of a `CREATE TABLE` and an `INSERT` statement.

```
CREATE TABLE SEQ_TST
(COL1 NUMERIC DEFAULT NEXTVAL('SEQ_1') PRIMARY KEY, COL2 VARCHAR(30));
INSERT INTO SEQ_TST (COL2) VALUES('A');
SELECT * FROM SEQ_TST;

col1  col2
100   A
```

Use the OWNED BY parameter to associate the sequence with a table.

```
CREATE SEQUENCE SEQ_1 START WITH 100 INCREMENT BY 1 OWNED BY SEQ_TST.COL1;
```

Query the current value of a sequence.

```
SELECT CURRVAL('SEQ_1');
```

Advance a sequence according to its `INCREMENT BY` value with `NEXTVAL`, or set it directly to a specific value with `SETVAL`.

```
SELECT NEXTVAL('SEQ_1');
OR
SELECT SETVAL('SEQ_1', 200);
```
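
`SETVAL` also accepts an optional third argument, `is_called`. When it is `false`, the next `NEXTVAL` call returns the stored value itself rather than the stored value plus the increment:

```
SELECT SETVAL('SEQ_1', 200, false);
SELECT NEXTVAL('SEQ_1'); -- returns 200

SELECT SETVAL('SEQ_1', 200, true); -- same as the two-argument form
SELECT NEXTVAL('SEQ_1'); -- returns 201
```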

Alter an existing sequence.

```
ALTER SEQUENCE SEQ_1 MAXVALUE 1000000;
```

### IDENTITY Usage


Starting with PostgreSQL 10, identity columns provide an alternative to the `SERIAL` pseudo-type that is more SQL standard compliant. Identity columns also behave more like SQL Server identity columns than `SERIAL` does.

To create a table with identity columns, use the following statement:

```
CREATE TABLE emps (
  emp_id INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  emp_name VARCHAR(35) NOT NULL);

INSERT INTO emps (emp_name) VALUES ('Robert');
INSERT INTO emps (emp_id, emp_name) VALUES (DEFAULT, 'Brian');

SELECT * FROM emps;

col1  col2
1     Robert
2     Brian
```

In PostgreSQL, for both `SERIAL` and `GENERATED BY DEFAULT` identity columns, you can insert an explicit value, so long as it doesn't violate the primary key constraint. However, explicit inserts don't advance the underlying sequence, so when the sequence later generates a value that was already inserted explicitly, the following error is raised:

```
SQL Error [23505]: ERROR: duplicate key value violates unique constraint "emps_pkey"
Detail: Key (emp_id)=(2) already exists.
```
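
To get behavior closer to SQL Server's `IDENTITY` property, where explicit values require `SET IDENTITY_INSERT ON`, define the column with `GENERATED ALWAYS AS IDENTITY`. Explicit inserts are then rejected unless you specify `OVERRIDING SYSTEM VALUE`. The table name below is illustrative:

```
CREATE TABLE emps_strict (
  emp_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  emp_name VARCHAR(35) NOT NULL);

-- Fails: explicit values aren't allowed for GENERATED ALWAYS columns
INSERT INTO emps_strict (emp_id, emp_name) VALUES (10, 'Robert');

-- Succeeds, similar to SET IDENTITY_INSERT emps_strict ON in SQL Server
INSERT INTO emps_strict (emp_id, emp_name)
  OVERRIDING SYSTEM VALUE VALUES (10, 'Robert');
```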

### SERIAL Usage


In PostgreSQL, you can create a sequence similar to the `IDENTITY` property supported by identity columns. When you create a new table, the sequence is created through the `SERIAL` pseudo-type. Other types from the same family are `SMALLSERIAL` and `BIGSERIAL`.

By assigning a `SERIAL` type to a column during table creation, PostgreSQL creates a sequence using the default configuration, sets the column default to the next sequence value, and adds a `NOT NULL` constraint to the column. The newly created sequence behaves like a regular sequence (incremented by 1); there is no composite `SERIAL` option for customizing the sequence inline.

The following example uses `SERIAL` sequence.

```
CREATE TABLE SERIAL_SEQ_TST(COL1 SERIAL PRIMARY KEY, COL2 VARCHAR(10));

INSERT INTO SERIAL_SEQ_TST(COL2) VALUES('A');
SELECT * FROM SERIAL_SEQ_TST;

col1  col2
1     A

\ds

Schema  Name                     Type      Owner
public  serial_seq_tst_col1_seq  sequence  pg_tst_db
```

The following example uses the PostgreSQL `SERIAL` pseudo-type with a sequence that is created implicitly.

```
CREATE TABLE SERIAL_SEQ_TST(COL1 SERIAL PRIMARY KEY, COL2 VARCHAR(10));

\ds

Schema  Name                     Type      Owner
public  serial_seq_tst_col1_seq  sequence  pg_tst_db

ALTER SEQUENCE SERIAL_SEQ_TST_COL1_SEQ RESTART WITH 100 INCREMENT BY 10;
INSERT INTO SERIAL_SEQ_TST(COL2) VALUES('A');
INSERT INTO SERIAL_SEQ_TST(COL1, COL2) VALUES(DEFAULT, 'B');
SELECT * FROM SERIAL_SEQ_TST;

col1  col2
100   A
110   B
```

Use the `ALTER SEQUENCE` command to change the default sequence configuration in a `SERIAL` column.

Create a table with a `SERIAL` column that uses increments of 10:

```
CREATE TABLE SERIAL_SEQ_TST(COL1 SERIAL PRIMARY KEY, COL2 VARCHAR(10));

ALTER SEQUENCE serial_seq_tst_col1_seq INCREMENT BY 10;
```

**Note**  
The auto-generated sequence is named using the format `tablename_columnname_seq`.
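
Because long identifiers can cause the generated name to be truncated, it is safer to resolve the sequence name with the built-in `pg_get_serial_sequence` function than to construct it by hand:

```
SELECT pg_get_serial_sequence('serial_seq_tst', 'col1');
-- public.serial_seq_tst_col1_seq
```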

Create a table with a compound primary key including a `SERIAL` column:

```
CREATE TABLE SERIAL_SEQ_TST
(COL1 SERIAL, COL2 VARCHAR(10), PRIMARY KEY (COL1, COL2));
```
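
In SQL Server, reseeding an identity column uses `DBCC CHECKIDENT`. In Aurora PostgreSQL, the equivalent is to restart the column's backing sequence. The reseed value below is illustrative:

```
-- SQL Server: DBCC CHECKIDENT ('serial_seq_tst', RESEED, 1000);
-- Aurora PostgreSQL equivalent:
ALTER SEQUENCE serial_seq_tst_col1_seq RESTART WITH 1000;

-- Or, using the sequence manipulation functions:
SELECT SETVAL('serial_seq_tst_col1_seq', 1000, false);
```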

## Summary


The following table identifies similarities, differences, and key migration considerations.


| Feature | SQL Server |  Aurora PostgreSQL  | 
| --- | --- | --- | 
|  Independent `SEQUENCE` object  |   `CREATE SEQUENCE`   |   `CREATE SEQUENCE`   | 
|  Automatic enumerator column property  |   `IDENTITY`   |   `SERIAL` or `IDENTITY`   | 
|  Reseed sequence value  |   `DBCC CHECKIDENT`   |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/chap-sql-server-aurora-pg.tsql.sequences.html)  | 
|  Column restrictions  |  Numeric  |  Numeric  | 
|  Controlling seed and interval values  |   `CREATE/ALTER SEQUENCE`   |   `CREATE/ALTER SEQUENCE`   | 
|  Sequence setting initialization  |  Maintained through service restarts  |   `ALTER SEQUENCE`   | 
|  Explicit values to column  |  Not allowed by default, `SET IDENTITY_INSERT ON` required  |  Allowed  | 

For more information, see [CREATE SEQUENCE](https://www.postgresql.org/docs/13/sql-createsequence.html), [Sequence Manipulation Functions](https://www.postgresql.org/docs/13/functions-sequence.html), [Numeric Types](https://www.postgresql.org/docs/13/datatype-numeric.html), and [CREATE TABLE](https://www.postgresql.org/docs/13/sql-createtable.html) in the *PostgreSQL documentation*.