

 Amazon Redshift will no longer support the creation of new Python UDFs starting with patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Data sharing in Amazon Redshift

With Amazon Redshift, you can securely share data across Amazon Redshift clusters or with other AWS services. Database administrators and data engineers can use data sharing to provide secure, read-only access to data for analytics purposes, while maintaining control over the data. Data analysts, business intelligence professionals, and data scientists can use shared data to gain insights without duplicating or moving it. Common use cases include sharing data with partners, enabling cross-functional analysis, and facilitating data democratization within an organization. The following sections cover the details of configuring and managing data sharing in Amazon Redshift.

With Amazon Redshift *data sharing*, you can securely share access to live data across Amazon Redshift clusters, workgroups, AWS accounts, and AWS Regions without manually moving or copying the data. Since the data is live, all users can see the most up-to-date and consistent information in Amazon Redshift as soon as it’s updated.

You can share data across provisioned clusters, serverless workgroups, Availability Zones, AWS accounts, and AWS Regions. You can share between different cluster types, as well as between provisioned clusters and Amazon Redshift Serverless workgroups.

You can share database objects for both reads and writes across different Amazon Redshift clusters or Amazon Redshift Serverless workgroups within the same AWS account, or from one AWS account to another. You can read and write data across Regions as well. You can grant permissions such as SELECT, INSERT, and UPDATE for different tables and USAGE and CREATE for different schemas. The data is live and available to all warehouses as soon as a write transaction is committed.
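As a hedged sketch of the grants described above, a producer administrator might run SQL like the following. The datashare name `salesshare` and the objects in `public` are illustrative, not taken from this guide:

```sql
-- Producer side: expose a schema and grant both read and write
-- permissions on a table through a datashare
GRANT USAGE ON SCHEMA public TO DATASHARE salesshare;
GRANT SELECT, INSERT, UPDATE ON TABLE public.sales TO DATASHARE salesshare;
```

Once a write transaction against `public.sales` is committed, consumers that query the shared table see the updated rows.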

## Data sharing use cases for Amazon Redshift

Amazon Redshift data sharing is especially useful for these use cases:
+ **Supporting different kinds of business-critical workloads** – Use a central extract, transform, and load (ETL) cluster that shares data with multiple business intelligence (BI) or analytic clusters. This approach provides read workload isolation and chargeback for individual workloads. You can size and scale your individual workload compute according to the workload-specific requirements of price and performance.
+ **Enabling cross-group collaboration** – Enable seamless collaboration across teams and business groups for broader analytics, data science, and cross-product impact analysis.
+ **Delivering data as a service** – Share data as a service across your organization.
+ **Sharing data between environments** – Share data among development, test, and production environments. You can improve team agility by sharing data at different levels of granularity.
+ **Licensing access to data in Amazon Redshift** – List Amazon Redshift data sets in the AWS Data Exchange catalog that customers can find, subscribe to, and query in minutes.

### Data sharing write-access use cases

Data sharing for writes has several important use cases:
+ **Update business source data on the producer** – You can share data as a service across your organization, and consumers can also perform actions on the source data. For instance, they can communicate up-to-date values back or acknowledge receipt of data. These are just a couple of possible business use cases.
+ **Insert additional records on the producer** – Consumers can add records to the original source data. These can be marked as from the consumer, if needed.

For information specifically regarding how to perform write operations on a datashare, see [Sharing write access to data](https://docs.aws.amazon.com/redshift/latest/dg/getting-started-datashare-writes.html).
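As a rough sketch of the write-back use cases above, a consumer with write access might acknowledge or add records like this. The database name `sales_db` and table `public.sales` are illustrative; see the linked guide for the full setup:

```sql
-- Consumer side: switch to the database created from the datashare
USE sales_db;

-- Acknowledge receipt of a record on the producer's source table
UPDATE public.sales SET status = 'acknowledged' WHERE sale_id = 100;

-- Insert an additional record, marked as coming from the consumer
INSERT INTO public.sales (sale_id, status, origin) VALUES (101, 'new', 'consumer');
```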

## Data sharing at different levels in Amazon Redshift

With Amazon Redshift, you can share data at different levels. These levels include databases, schemas, tables, views (including regular, late-binding, and materialized views), and SQL user-defined functions (UDFs). You can create multiple datashares for a given database. A datashare can contain objects from multiple schemas in the database on which it is created.

By having this flexibility in sharing data, you get fine-grained access control. You can tailor this control for different users and businesses that need access to Amazon Redshift data.
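To illustrate sharing at different levels, a producer can add individual objects or whole schemas to a datashare. This is a minimal sketch; the names `salesshare`, `public`, and `public.sales` are hypothetical:

```sql
-- Create the datashare, then add objects at different granularities
CREATE DATASHARE salesshare;
ALTER DATASHARE salesshare ADD SCHEMA public;
ALTER DATASHARE salesshare ADD TABLE public.sales;

-- Or add every table in a schema at once
ALTER DATASHARE salesshare ADD ALL TABLES IN SCHEMA public;
```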

## Consistency management for data sharing in Amazon Redshift

Amazon Redshift provides transactional consistency on all producer and consumer clusters and shares up-to-date and consistent views of the data with all consumers. 

You can continuously update data on the producer cluster. All queries on a consumer cluster within a transaction read the same state of the shared data. Amazon Redshift doesn't consider the data that was changed by another transaction on the producer cluster that was committed after the beginning of the transaction on the consumer cluster. After the data change is committed on the producer cluster, new transactions on the consumer cluster can immediately query the updated data.

This strong consistency removes the risk of business reports containing invalid results while data is being shared. It is especially important for financial analysis, or when the results are used to prepare datasets for training machine learning models.

# Considerations for data sharing in Amazon Redshift

With Amazon Redshift *data sharing*, you can securely share access to live data across Amazon Redshift clusters, workgroups, AWS accounts, and AWS Regions without manually moving or copying the data. Previously, objects in datashares were read only in all circumstances. Writing to an object in a datashare is a new feature. Objects in datashares are only write-enabled when a producer specifically grants write privileges like INSERT or CREATE on objects to the datashare. Additionally, for cross-account sharing, a producer has to authorize the datashare for writes and the consumer has to associate specific clusters and workgroups for writes.

This section covers considerations when working with Amazon Redshift data sharing.

**Topics**
+ [General considerations for data sharing in Amazon Redshift](considerations-datashare-general.md)
+ [Considerations for data sharing reads and writes in Amazon Redshift](considerations-datashare-reads-writes.md)
+ [Considerations for data sharing in Amazon Redshift Serverless restore](considerations-datashare-serverless-restore.md)
+ [Considerations for data sharing with data lake tables in Amazon Redshift](considerations-datashare-datalake.md)
+ [Considerations for data sharing with AWS Lake Formation in Amazon Redshift](lake-formation-considerations.md)
+ [Considerations for data sharing with AWS Data Exchange in Amazon Redshift](adx-considerations.md)
+ [Permissions you can grant to datashares](permissions-datashares.md)
+ [Supported SQL statements for data sharing writes on consumers](multi-warehouse-writes-sql-statements.md)
+ [Unsupported SQL statements for data sharing writes on consumers](multi-warehouse-writes-sql-statements-unsupported.md)

# General considerations for data sharing in Amazon Redshift

The following are general considerations when working with datashares in Amazon Redshift:
+ *Default database* – When you read data from a datashare, you remain connected to your local cluster database. For more information about setting up and reading from a database created from a datashare, see [Querying datashare objects](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-datashare-using.html#query-editor-v2-datashare-consumer) and [Materialized views on external data lake tables in Amazon Redshift Spectrum](materialized-view-external-table.md).
+ *Connections* – To write to a datashare, you must be connected directly to the datashare database, run the USE command, or use three-part notation. The USE command is not supported on external tables.
+ *Performance* – The performance of the queries on shared data depends on the compute capacity of the consumer clusters.
+ *Data transfer charges* – Cross-Region data sharing includes additional cross-Region data-transfer charges. 
  + These data-transfer charges don't apply within the same Region, only across Regions. For more information, see [Managing cost control for cross-Region data sharing](cross-region-billing.md). 
  + The consumer is charged for all compute and cross-region data transfer fees required to query the producer's data. The producer is charged for the underlying storage of data in their provisioned cluster or serverless namespace.
+ *Data sharing within and between clusters* – You only need datashares when you are sharing data between different Amazon Redshift provisioned clusters or serverless workgroups. Within the same cluster, you can query another database using simple three-part notation `database.schema.table` as long as you have the required permissions on the objects in the other database.
+ *Metadata discovery* – When you're a consumer connected directly to a datashare database through the Redshift JDBC, ODBC, or Python drivers, you can view catalog data in the following ways: 
  + SQL [SHOW](https://docs.aws.amazon.com/redshift/latest/dg/r_SHOW.html) commands.
  + Querying `information_schema` tables and views.
  + Querying [SVV metadata views](https://docs.aws.amazon.com/redshift/latest/dg/svv_views.html).
+ *Permissions visibility* – Consumers can see the permissions granted to the datashares through the SHOW GRANTS SQL command.
+ *Cluster encryption management for data sharing* – To share data across AWS accounts, both the producer and consumer clusters must be encrypted. 
  + If both the producer and consumer clusters and serverless namespaces are in the same account, they must have the same encryption type (either both unencrypted, or both encrypted). In every other case, including Lake Formation managed datashares, both the consumer and producer must be encrypted. This is for security purposes. However, they don't need to share the same encryption key.
  + To protect data in transit, all data is encrypted in transit using the encryption schema of the producer cluster. The consumer cluster adopts this encryption schema when data is loaded, and then operates as a normal encrypted cluster. Communications between the producer and consumer are also encrypted using a shared key schema. For more information about encryption in transit, see [Encryption in transit](https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption-in-transit.html).
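As a sketch of the connection options described above, a consumer can query shared data with three-part notation or make the datashare database the default with USE. The database name `sales_db` and table `public.sales` are illustrative:

```sql
-- Query a database created from a datashare with three-part notation
SELECT count(*) FROM sales_db.public.sales;

-- Or switch to the datashare database for subsequent statements
USE sales_db;
SELECT count(*) FROM public.sales;
```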

# Considerations for data sharing reads and writes in Amazon Redshift

**Note**  
Amazon Redshift multi-warehouse writes using data sharing is only supported on Amazon Redshift patch 186 for provisioned clusters on current track version 1.0.78881 or greater, and for Amazon Redshift Serverless workgroups on version 1.0.78890 or greater. 

The following are considerations when working with datashare reads and writes in Amazon Redshift:
+ You can only share SQL UDFs through datashares. Python and Lambda UDFs aren't supported.
+ If the producer database has specific collation, use the same collation settings for the consumer database.
+ Amazon Redshift doesn't support nested SQL user-defined functions on producer clusters.
+ Amazon Redshift doesn't support sharing tables with interleaved sort keys and views that refer to tables with interleaved sort keys.
+ Amazon Redshift doesn't support accessing a datashare object on which concurrent DDL occurred between the prepare and execute phases of the access.
+ Amazon Redshift doesn't support sharing stored procedures through datashares.
+ Amazon Redshift doesn't support sharing metadata system views and system tables.
+ *Compute type* – You must use Serverless workgroups, ra3.large clusters, ra3.xlplus clusters, ra3.4xl clusters, or ra3.16xl clusters to use this feature. 
+ *Isolation level* – Your database’s isolation level must be snapshot isolation in order to allow other Serverless workgroups and provisioned clusters to write to it.
+ *Multi-statement queries and transactions* – Multi-statement queries outside of a transaction block aren't currently supported. As a result, if you're using a query editor such as DBeaver and you have multiple write queries, wrap them in an explicit BEGIN...END transaction block.

  When multi-command statements are used outside of transactions, if the first command is a write to a producer database, subsequent write commands in the statement are only allowed to that producer database. If the first command is a read, subsequent write commands are only allowed to the used database, if set, and otherwise to the local database. Note that writes in a transaction are supported to only a single database.
+ *Consumer sizing* – Consumer clusters must have at least 64 slices or more to perform writes using data sharing. 
+ *Views and materialized views* – You can't create, update, or alter views or materialized views on a datashare database.
+ *Security* – You can't attach or remove security policies such as column-level (CLS), row-level (RLS) and dynamic data masking (DDM) to datashare objects.
+ *Manageability* – Consumer warehouses can't add datashare objects, or views referencing datashare objects, to another datashare. Consumers also can't modify or drop an existing datashare.
+ *Truncate operations* – Datashare writes support transactional truncates for remote tables. This is different from truncates that you run locally on a cluster, which auto-commit. For more information about the SQL command, see [TRUNCATE](https://docs.aws.amazon.com/redshift/latest/dg/r_TRUNCATE.html).
+ *Cloning* – CREATE TABLE with LIKE clause statements support cloning from a single parent table when you write from consumer warehouses to producers.
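For example, to run several writes from a query editor that sends multi-statement batches, wrap them in an explicit transaction as described above. The database and table names are illustrative:

```sql
BEGIN;
INSERT INTO sales_db.public.sales (sale_id, status) VALUES (1, 'new');
UPDATE sales_db.public.sales SET status = 'done' WHERE sale_id = 1;
END;
```

Note that all writes inside the transaction target a single database, consistent with the restriction noted above.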

# Considerations for data sharing in Amazon Redshift Serverless restore

Consider the following when working with datashares during Amazon Redshift Serverless restore operations:

**Restore snapshot to an Amazon Redshift Serverless producer namespace:**
+ Restoring a snapshot to an Amazon Redshift Serverless producer namespace replaces the current datashares with the datashares in the snapshot. However, a datashare permission is maintained only if ALL of the following conditions are met:
  + The snapshot is taken on the same namespace the customer is restoring to, AND
  + The datashare permission exists on the current namespace, AND
  + The datashare permission existed on the namespace when the snapshot was taken.
+ If a datashare was granted to a consumer when taking the snapshot, but was dropped or the grant was revoked on the latest namespace, the datashare will be restored. However, datashare permissions granted to consumer clusters are no longer valid after restore. To revive datashare access, re-grant usage permissions of datashares to desired consumer clusters.
+ If a datashare is granted to a consumer on the latest namespace, but was not granted when taking the snapshot, the datashare access won't be maintained after restore.
+ Datashare authorization/association scope won't change on the namespace after restore. For example, before restore if a datashare is associated to the entire AWS account, after restore the association scope will remain at account level regardless of the association scope when taking the snapshot.
+ Restoring from a snapshot will revert the publicly accessible setting to its state at the time the snapshot was created, regardless of any subsequent changes.

**Restore snapshot to an Amazon Redshift Serverless consumer namespace:**
+ Restoring a consumer namespace preserves datashare access without requiring the producer administrator to re-grant usage. However, if the database created from the datashare no longer exists after restore, the consumer must recreate it from the datashare.

# Considerations for data sharing with data lake tables in Amazon Redshift

The following are considerations when working with data lake tables in Amazon Redshift:
+ Data sharing of data lake tables doesn't support customer managed AWS KMS keys for Amazon S3 bucket encryption. You can use AWS managed keys for encryption. For more information, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html).
+ To share data lake tables from an encrypted AWS Glue catalog, you must delegate AWS KMS operations to an IAM role by following the instructions in [Encrypting your Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/encrypt-glue-data-catalog.html).
+ External tables that explicitly specify manifest files in the `LOCATION` clause aren't supported for data sharing. This includes the following tables that Amazon Redshift Spectrum supports: 
  + Delta Lake
  + Hudi

# Considerations for data sharing with AWS Lake Formation in Amazon Redshift

The following are considerations and limitations for sharing Amazon Redshift data with Lake Formation. For information on data sharing considerations and limitations, see [Considerations when using data sharing in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/datashare-considerations.html). For information about Lake Formation limitations, see [Notes on working with Amazon Redshift datashares in Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/notes-rs-datashare.html).
+ Sharing a datashare to Lake Formation across Regions is currently unsupported.
+ If column-level filters are defined for a user on a shared relation, performing a `SELECT *` operation returns only the columns the user has access to.
+ Cell-level filters from Lake Formation are unsupported.
+ If you created a view and shared it and its underlying tables to Lake Formation, you can configure filters to manage access to the tables. Amazon Redshift enforces Lake Formation-defined policies when consumer cluster users access shared objects. When a user accesses a view shared with Lake Formation, Redshift enforces only the Lake Formation policies defined on the view, not those on the tables contained within the view. However, when users directly access a table, Redshift enforces the Lake Formation policies defined on the table.
+ You can't create materialized views on the consumer based on a shared table if the table has Lake Formation filters configured.
+ The Lake Formation administrator must have [data lake administrator](https://docs.aws.amazon.com/lake-formation/latest/dg/getting-started-setup.html#create-data-lake-admin) permissions and the [required permissions to accept a datashare](https://docs.aws.amazon.com/lake-formation/latest/dg/redshift-ds-prereqs.html).
+ The producer cluster must be an RA3 cluster with the latest Amazon Redshift cluster version, or a serverless workgroup, to share datashares via Lake Formation.
+ Both the producer and consumer clusters must be encrypted.
+ Redshift row-level and column-level access control policies implemented in the producer cluster or workgroup are ignored when the datashare is shared to Lake Formation. The Lake Formation administrator must configure these policies in Lake Formation. The producer cluster or workgroup administrator can turn off RLS for a table by using the [ALTER TABLE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE.html) command.
+ Sharing datashares via Lake Formation is only available to users who have access to both Redshift and Lake Formation.

# Considerations for data sharing with AWS Data Exchange in Amazon Redshift

When using AWS Data Exchange for Amazon Redshift, consider the following:
+ Both producers and consumers must use the RA3 instance types to use Amazon Redshift datashares. Producers must use the RA3 instance types with the latest Amazon Redshift cluster version.
+ Both the producer and consumer clusters must be encrypted.
+ You must be registered as an AWS Data Exchange provider to list products on AWS Data Exchange, including products that contain AWS Data Exchange datashares. For more information, see [Getting started as a provider](https://docs.aws.amazon.com/data-exchange/latest/userguide/provider-getting-started.html).
+ You don't need to be a registered AWS Data Exchange provider to find, subscribe to, and query Amazon Redshift data through AWS Data Exchange.
+ To control access to your data, create AWS Data Exchange datashares with the publicly accessible setting turned on. To alter an AWS Data Exchange datashare to turn off the publicly accessible setting, set the session variable to allow ALTER DATASHARE SET PUBLICACCESSIBLE FALSE. For more information, see [ALTER DATASHARE usage notes](r_ALTER_DATASHARE.md#r_ALTER_DATASHARE_usage).
+ Producers can't manually add or remove consumers from AWS Data Exchange datashares because access to the datashares is granted based on having an active subscription to an AWS Data Exchange product that contains the AWS Data Exchange datashare.
+ Producers can't view the SQL queries that consumers run. They can only view metadata, such as the number of queries or the objects consumers query, through Amazon Redshift tables that only the producer can access. For more information, see [Monitoring and auditing data sharing in Amazon Redshift](auditing.md).
+ We recommend that you make your datashares publicly accessible. If you don't, subscribers on AWS Data Exchange with publicly accessible consumer clusters won't be able to use your datashare.
+ We recommend that you don't delete an AWS Data Exchange datashare shared to other AWS accounts using the DROP DATASHARE statement. If you do, the AWS accounts that have access to the datashare will lose access. This action is irreversible. Performing this type of alteration can breach data product terms in AWS Data Exchange. If you want to delete an AWS Data Exchange datashare, see [DROP DATASHARE usage notes](r_DROP_DATASHARE.md#r_DROP_DATASHARE_usage).
+ For cross-Region data sharing, you can create AWS Data Exchange datashares to share licensed data.
+ When consuming data from a different Region, the consumer pays the cross-Region data transfer fee from the producer Region to the consumer Region.
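As a hedged sketch of the publicly accessible setting described above: turning it back off on an AWS Data Exchange datashare requires setting a session variable first. The datashare name is illustrative, and the variable value shown is a placeholder taken from the error message Amazon Redshift returns; consult the linked ALTER DATASHARE usage notes for details:

```sql
-- The value is a placeholder; use the one from the error message
SET datashare_break_glass_session_var TO '<value-from-error-message>';
ALTER DATASHARE salesshare SET PUBLICACCESSIBLE FALSE;
```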

# Permissions you can grant to datashares

The following lists show the object types and the permissions you can grant on them in a data sharing context.

Databases:
+ CREATE
+ USAGE
+ ALTER
+ DROP

Schemas:
+ CREATE
+ USAGE
+ ALTER
+ DROP

Tables:
+ SELECT
+ INSERT
+ UPDATE
+ DELETE
+ TRUNCATE
+ DROP
+ ALTER
+ REFERENCES

Functions:
+ EXECUTE
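Combining the lists above, grants to a datashare might look like the following. This is a minimal sketch; the datashare name, objects, and the function signature are hypothetical:

```sql
-- Schema-level and table-level permissions
GRANT USAGE ON SCHEMA public TO DATASHARE salesshare;
GRANT SELECT, INSERT ON TABLE public.sales TO DATASHARE salesshare;

-- Function-level permission (argument types are part of the signature)
GRANT EXECUTE ON FUNCTION public.f_convert(integer) TO DATASHARE salesshare;
```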

# Supported SQL statements for data sharing writes on consumers


The following Data Definition Language (DDL) statements are supported for data sharing with writes:
+ ALTER TABLE RENAME TO
+ ALTER TABLE RENAME COLUMN TO
+ ALTER TABLE ADD/DROP COLUMN
+ ALTER SCHEMA RENAME TO
+ { CREATE | DROP } SCHEMA
+ { CREATE | DROP | SHOW } TABLE
+ CREATE TABLE table_name AS
+ { BEGIN | START TRANSACTION }
+ { END | COMMIT | ROLLBACK }
+ TRUNCATE

The following Data Manipulation Language (DML) statements are supported for data sharing with writes:
+ SELECT
+ INSERT
+ INSERT INTO SELECT
+ UPDATE
+ DELETE
+ MERGE
+ COPY without COMPUPDATE
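For instance, supported DML run from a consumer against a shared database might look like the following. All object names, the S3 path, and the IAM role ARN are placeholders, and the MERGE clauses are a sketch of the general Redshift syntax:

```sql
-- Insert from a local staging table into the shared table
INSERT INTO sales_db.public.sales SELECT * FROM staging.new_sales;

-- Merge staged changes into the shared table
MERGE INTO sales_db.public.sales USING staging.new_sales s
    ON sales.sale_id = s.sale_id
    WHEN MATCHED THEN UPDATE SET status = s.status
    WHEN NOT MATCHED THEN INSERT VALUES (s.sale_id, s.status);

-- COPY is supported only with COMPUPDATE turned off
COPY sales_db.public.sales FROM 's3://amzn-s3-demo-bucket/data/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    COMPUPDATE OFF;
```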

The following analyze statements are supported for data sharing with writes:
+ ANALYZE. The consumer runs USER ANALYZE and sends the stats to the producer. 
+ Analyze activated by CTAS/COPY running on the consumer. This includes multiplexing for small consumers.
+ Auto-analyze run on the producer after COPY.

The following permission statements are supported for data sharing with writes:
+ { GRANT | REVOKE } privilege_name ON OBJECT_TYPE object_name TO consumer_user
+ SHOW GRANTS. For more information, see [SHOW GRANTS](r_SHOW_GRANTS.md).

# Unsupported SQL statements for data sharing writes on consumers


The following aren't supported:
+ Multi-statement queries to consumer warehouses when writing to producers.
+ Multi-statement queries to consumer warehouses in a different database, if the previous command is a read statement.
+ Object references other than three-part notation (`database.schema.object`), such as one-part or two-part notation, if not connected to the shared database.
+ Concurrency scaling queries writing from consumers to producers.
+ Auto-copy jobs writing from consumers to producers.
+ Streaming jobs writing from consumers to producers.
+ Consumers creating zero-ETL integration tables on producer clusters. For more information about zero-ETL integrations, see [Working with zero-ETL integrations](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.html).
+ Writing to a table with an interleaved sort key through a data share.
+ Writing to a stored procedure through a data share.
+ Writing to a SQL user-defined function (UDF) through a data share. This includes nested, Python, and Lambda UDFs.
+ UPDATE, INSERT, or COPY statements on identity columns to consumer warehouses with more compute slices than the producer.
+ MERGE statements on non-RMS external tables to consumer warehouses, when writing to producers.
+ CREATE TABLE statements with:
  + DEFAULT expression set to data type VARBYTE. The VARBYTE data type can't be implicitly converted to any other data type. For more information, see [CAST function](r_CAST_function.md).
  + AS OF SELECT statements with NULL parameter to consumer warehouses, when writing to producers. 

# AWS Regions where data sharing is available

The following table lists availability for both data sharing read and write capabilities.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/data_sharing_regions.html)

# Getting started with data sharing in Amazon Redshift

You can get started with data sharing by following one of the guides in this section.

**Topics**
+ [Getting started with read-only data sharing in the console](getting-started-datashare-read-only.md)
+ [Getting started with read-only data sharing with the SQL interface](getting-started-datashare-sql.md)
+ [Getting started with multi-warehouse writes using data sharing in Amazon Redshift](getting-started-datashare-writes.md)
+ [Getting started with data sharing with AWS CloudFormation in Amazon Redshift](data-sharing-within-account-CF.md)

# Getting started with read-only data sharing in the console


With Amazon Redshift, you can manage data sharing using the console to control access and govern data across Amazon Redshift clusters and AWS accounts. The following sections provide step-by-step instructions on configuring and managing data sharing using the Amazon Redshift console.

Use the Amazon Redshift console to manage datashares created in your account or shared from other accounts.

You need permissions to create, edit, or delete datashares. For information, see [Managing permissions for datashares in Amazon Redshift](writes-managing-permissions.md).
+ If you are a producer administrator, you can create datashares, add data consumers, add datashare objects, create databases from datashares, edit datashares, or delete datashares from the **Clusters** tab. 

  From the navigation menu, navigate to the **Clusters** tab and choose a cluster from the cluster list. Then do one of the following:
  + Choose the **Datashares** tab, choose a datashare from the **Datashares created in my namespace** section. Then do one of the following:
    + [Create a datashare](datashare-creation.md#create-datashare-console)

      When a datashare is created, you can add datashare objects or data consumers. For more information, see [Add datashare objects to datashares](datashare-creation.md#add-datashare-object-console) and [Add data consumers to datashares](datashare-creation.md#add-data-consumer-console).
    + [Editing datashares created in your account](manage-datashare-existing-console.md#edit-datashare-console)
    + [Deleting a datashare created in your account](manage-datashare-existing-console.md#delete-datashare-console)
  + Choose **Datashares** and choose a datashare from the **Datashares from other clusters** section. Then do one of the following:
    + [Create a datashare](datashare-creation.md#create-datashare-console)
    + [Creating databases from datashares](query-datashare-console.md#create-database-from-datashare-console)
  + Choose **Databases** and choose a database from the **Databases** section. Then choose **Create datashare**. For more information, see [Creating databases from datashares](query-datashare-console.md#create-database-from-datashare-console).

**Note**  
To view databases and objects within databases or to view datashares in the cluster, connect to a database. For more information, see [Connecting to a database](connect-database-console.md).

# Connecting to a database


With Amazon Redshift, you can establish a connection to your data warehouse cluster and run SQL queries, load data, or perform administrative tasks. Connecting to a database refers to the process of creating a secure channel between a client application or tool and the Amazon Redshift cluster. The following sections provide step-by-step instructions on how to connect to an Amazon Redshift database.

Connect to a database to view databases and objects within databases in this cluster or to view datashares.

The user credentials used to connect to a specified database must have the necessary permissions to view all datashares.

If there is no local connection, do one of the following:
+ In the cluster details page, from the **Databases** tab, in the **Databases** or **Datashare objects** section, choose **Connect to database** to view database objects in the cluster.
+ In the cluster details page, from the **Datashares** tab, do one of the following:
  + In the **Datashares from other clusters** section, choose **Connect to database** to view datashares from other clusters.
  + In the **Datashares created in my cluster** section, choose **Connect to database** to view datashares in your cluster.
+ On the **Connect to database** window, do one of the following:
  + If you choose **Create a new connection**, choose **AWS Secrets Manager** to use a stored secret to authenticate access for the connection. 

    Or, choose **Temporary credentials** to use database credentials to authenticate access for the connection. Specify values for **Database name** and **Database user**.

    Choose **Connect**.
  + Choose **Use a recent connection** to connect to another database for which you have the necessary permissions.

    Amazon Redshift automatically makes the connection.

After a database connection is established, you can start creating datashares, querying datashares, or creating databases from datashares.

# Creating datashares


With Amazon Redshift, you can share live data across Amazon Redshift clusters or AWS accounts using datashares. A datashare is a producer-side object that allows you to share live data from your Amazon Redshift cluster with other clusters or AWS accounts. Creating datashares enables secure data sharing while maintaining control over access and ensuring that data remains up to date. The following sections provide details on creating datashares and adding database objects such as schemas, tables, and views to share live data securely.

## Create a datashare


A datashare is a logical container of database objects, permissions, and consumers. Consumers are Amazon Redshift provisioned clusters or Amazon Redshift Serverless namespaces in your account and other AWS accounts. Each datashare is associated with the database it's created in, and only objects from that database can be added. As a producer administrator, you can create datashares on the console or with SQL by following one of the procedures below.

------
#### [ Console ]

On the console, you can create datashares from the **Datashares** tabs in the cluster or namespace details page. After the datashare is created, you can create databases from the datashare on a consumer as a consumer administrator.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. In the cluster or namespace details page, on the **Datashares** tab, in the **Datashares** section, connect to a database if you don't already have a database connection. Then, in the **Datashares created in my account** section, choose **Create datashare**. The **Create datashare** page appears.

1. Choose **Create datashare**. You can only create a datashare from a local database. If you haven't connected to the database, the **Connect to database** page appears. Follow the steps in [Connecting to a database](connect-database-console.md) to connect to a database. If there is a recent connection, the **Create datashare** page appears. 

1. In the **Datashare information** section, choose one of the following:
   + Choose **Datashare** to create a datashare that shares data for read or write purposes across different Amazon Redshift data warehouses (provisioned clusters or Amazon Redshift Serverless workgroups), in the same AWS account or in different AWS accounts.
   + Choose **AWS Data Exchange datashare** to create datashares to license your data through AWS Data Exchange.

1. Specify values for **Datashare name**, **Database name**, and **Publicly accessible**. When you change the database name, make a new database connection.

1. Add objects to your datashare either using the **Scoped permissions** or **Direct permissions** sections. To add objects to a datashare, see [Creating a datashare in Amazon Redshift](writes-creating-datashare.md). 

1. In the **Data consumers** section, you can choose to publish to Amazon Redshift, or publish to the AWS Glue Data Catalog, which starts the process of sharing data with Lake Formation. Publishing your datashare to Amazon Redshift means sharing your data with another namespace or Amazon Redshift account that acts as the consumer.
**Note**  
Once the datashare is created, you can't edit the configuration to publish to the other option.

1. Choose **Create datashare**.

------
#### [ SQL ]

The following command creates a datashare:

```
CREATE DATASHARE salesshare;
```

At the time of datashare creation, each datashare is associated with a database. Only objects from that database can be shared in that datashare. Multiple datashares can be created on the same database with the same or different granularity of objects. There is no limit on the number of datashares a cluster can create. You can also use the Amazon Redshift console to create datashares. For more information, see [CREATE DATASHARE](r_CREATE_DATASHARE.md).

You can also set security restrictions on the datashare when you create it. The following example creates a datashare that consumers with public IP access are allowed to read.

```
CREATE DATASHARE my_datashare SET PUBLICACCESSIBLE TRUE;
```

Setting PUBLICACCESSIBLE to TRUE allows consumers to query your datashare from publicly accessible clusters and Amazon Redshift Serverless workgroups. Omit this setting or explicitly set it to FALSE if you don't want to allow that.

You can modify consumer-related properties after you create a datashare. For example, you can specify that clusters that want to consume data from a given datashare can't be publicly accessible. Queries from consumer clusters that don't meet the security restrictions specified in the datashare are rejected at query runtime. For more information, see [ALTER DATASHARE](r_ALTER_DATASHARE.md).
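For example, the following statement changes the `my_datashare` datashare from the earlier example so that consumers can't query it from publicly accessible clusters:

```
ALTER DATASHARE my_datashare SET PUBLICACCESSIBLE FALSE;
```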

------

## Add datashare objects to datashares


You can add database objects of various types on the console or with SQL by following one of the procedures below.

------
#### [ Console ]

You can add objects to your datashare using either the **Scoped permissions** or the **Direct permissions** section. Select either **Grant scoped permissions** or **Grant direct permissions**, then select the **Add** button to add objects. A dialog appears. Perform the following steps:

1. If you select **Grant scoped permissions**, the **Grant scoped permissions** page appears where you can grant scoped permissions at either a database or a schema level. Datashares with scoped permissions have the specified permissions on all current and future objects within the database or schema. For more details see, [Scoped permissions](t_scoped-permissions.md).

   1. Next, select **Database scoped permissions** to grant scoped permissions at the database level. Database scoped permissions apply to the database you're connected to when creating the datashare. These permissions can't be granted to individual objects and apply to both existing and new objects (schemas, tables, views, and UDFs).

   1. Select the scoped permissions for schemas, tables, views, or user-defined functions. All objects in the database then have the selected permissions granted to consumers. Select **Grant** to complete granting database scoped permissions.

   1. Then, select **Schema scoped permissions** to grant scoped permissions at the schema level. When you grant schema-scoped permissions, all objects added to the schema have the specified datashare permissions. 

   1. Select the schema you want to add to the datashare from the dropdown. You can select only a single schema at a time. Then, select direct permission(s) you want to grant on the selected schema.

   1. Select scoped permissions for your schema objects, such as tables, views, and user-defined functions. Permissions are granted on all matching objects in the schema, both existing objects and those added in the future. After a scoped permission is applied, you can't remove it from an individual object without revoking the scoped permission.

   1. Select **Grant** to complete granting schema scoped permissions. 

1. If you select **Grant direct permissions**, the **Grant direct permissions** page appears, where you can grant permissions at the individual object level, such as a schema, table, view, or user-defined function. To grant direct permissions, you must first add the relevant schemas to the datashare.

   1. Next, select **Grant direct permissions to schemas** to apply direct permissions on a specific schema. Then select the schema permissions for your schema objects, such as tables, views, and user-defined functions, and select the schema that you want added to the datashare. Select **Grant** to complete adding the schema to the datashare.

   1. After you have a schema added to your datashare, you can proceed with adding direct permissions for your schema objects. Select **Grant direct permissions** again. The **Grant direct permissions** page appears. Then navigate to the direct permissions tabs for schema objects.

   1. Select **Grant direct permissions to tables and views** to grant object-level direct permissions on these objects. Select the required direct permissions and the required objects from the list. Use the search field to find datashare objects. Select **Grant** to complete adding tables and views to the datashare.

   1. Select **Grant direct permissions to user-defined functions** to grant object level direct permissions on user-defined functions. Select the required direct permission(s) and the required objects from the list. Use the search field to find datashare objects. Select **Grant** to complete adding user-defined functions to the datashare. 

1. You can also choose whether you want to **Add future objects**. When you choose this option, objects added to the schema later are automatically added to the datashare.

1. Choose **Add** to complete the section and add the objects. They're listed under the **Datashare objects**.

1. After you add objects, you can select individual objects and edit their permissions. If you select a schema, a dialog appears asking whether you want to add **Scoped permissions**. With scoped permissions, each existing or newly added object in the schema gets a pre-selected set of permissions appropriate for its object type. For instance, the administrator can specify that all added tables have SELECT and UPDATE permissions.

1. All datashare objects are listed under the **Scoped permissions** or **Direct permissions** sections.

1. In the **Data consumers** section, you can add namespaces or add AWS accounts as consumers of the datashare. 

1. Choose **Create datashare** to save your changes.

After you create the datashare, it appears in the list under **Datashares created in my namespace**. If you choose a datashare from the list, you can view its consumers, its objects, and other properties.

------
#### [ SQL ]

With SQL, the datashare owner must grant USAGE on the schemas they want to add to the datashare. The GRANT statement grants various actions on the schema, including CREATE and USAGE. The schemas hold the shared objects:

```
CREATE SCHEMA myshared_schema1;
CREATE SCHEMA myshared_schema2;
 
GRANT USAGE ON SCHEMA myshared_schema1 TO DATASHARE my_datashare;
GRANT CREATE, USAGE ON SCHEMA myshared_schema2 TO DATASHARE my_datashare;
```

Alternatively, the administrator can continue to run ALTER commands to add a schema to the datashare. Only USAGE permissions are granted when a schema is added this way.

```
ALTER DATASHARE my_datashare ADD SCHEMA myshared_schema1;
```

After the administrator adds schemas, they can grant datashare permissions on objects in the schemas. These can be both read and write permissions. The GRANT ALL example shows how to grant all permissions.

```
GRANT SELECT, INSERT ON TABLE myshared_schema1.table1, myshared_schema1.table2, myshared_schema2.table1
TO DATASHARE my_datashare;
                     
GRANT ALL ON TABLE myshared_schema1.table4 TO DATASHARE my_datashare;
```

You can continue to run commands like ALTER DATASHARE to add tables. When you do, only SELECT permissions are granted on the objects added.

```
ALTER DATASHARE my_datashare ADD TABLE myshared_schema1.table1, myshared_schema1.table2, myshared_schema2.table1;
```

You can grant scoped permissions to a datashare on all objects of a type within a database or schema. Datashares with scoped permissions have the specified permissions on all current and future objects within the database or schema.

You can view the scope of database-level scoped permissions in [SVV\_DATABASE\_PRIVILEGES](r_SVV_DATABASE_PRIVILEGES.md). You can view the scope of schema-level scoped permissions in [SVV\_SCHEMA\_PRIVILEGES](r_SVV_SCHEMA_PRIVILEGES.md).

The following is the syntax for granting scoped permissions to datashares. For more information about scoped permissions, see [Scoped permissions](t_scoped-permissions.md).

```
GRANT { { CREATE | USAGE | ALTER | DROP } [, ...] | ALL [ PRIVILEGES ] } FOR SCHEMAS IN
DATABASE db_name
TO DATASHARE datashare_name

GRANT { { SELECT | INSERT | UPDATE | DELETE | DROP | ALTER | TRUNCATE | REFERENCES } [, ...] | ALL [ PRIVILEGES ] } FOR TABLES IN
{ SCHEMA schema_name [ DATABASE db_name ] | DATABASE db_name }
TO DATASHARE datashare_name

GRANT { EXECUTE | ALL [ PRIVILEGES ] } FOR FUNCTIONS IN
{ SCHEMA schema_name [ DATABASE db_name ] | DATABASE db_name }
TO DATASHARE datashare_name
```
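For example, the following statement grants SELECT on all current and future tables in `myshared_schema1` to the `my_datashare` datashare from the earlier examples:

```
GRANT SELECT FOR TABLES IN SCHEMA myshared_schema1 TO DATASHARE my_datashare;
```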

------

## Add data consumers to datashares


You can add one or more data consumers to a datashare. Data consumers can be namespaces, which uniquely identify Amazon Redshift clusters, or AWS accounts.

You must explicitly choose whether to turn sharing your datashare with publicly accessible clusters on or off.
+ Choose **Add namespaces to the datashare**. A namespace is a globally unique identifier (GUID) for an Amazon Redshift cluster.
+ Choose **Add AWS accounts** to the datashare. The specified AWS accounts must have access permissions to the datashare.
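With SQL, you add data consumers by granting USAGE on the datashare. The following sketch uses the `my_datashare` example from earlier, with placeholder values for the consumer namespace GUID and AWS account ID:

```
-- Add a namespace in the same AWS account as a data consumer.
GRANT USAGE ON DATASHARE my_datashare TO NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';

-- Add another AWS account as a data consumer.
GRANT USAGE ON DATASHARE my_datashare TO ACCOUNT '123456789012';
```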

# Authorizing or removing authorization from datashares


As a producer administrator, choose which data consumers to authorize to access datashares or to remove authorization from. Authorized data consumers receive notifications to take actions on datashares. If you are adding a namespace as a data consumer, you don't have to perform authorization.

Prerequisite: To authorize or remove authorization for the datashare, there must be at least one data consumer added to the datashare.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Datashares**. The datashare list page appears.

1. Choose **In my account**.

1. In the **Datashares in my account** section, do one of the following:
   + Choose one or more consumer clusters that you want to authorize. The **Authorize data consumers** page appears. Then choose **Authorize**.

     If you chose **Publish to AWS Glue Data Catalog** when creating the datashare, you can only grant authorization of the datashare to a Lake Formation account.

     For AWS Data Exchange datashare, you can only authorize one datashare at a time.

     When you authorize an AWS Data Exchange datashare, you are sharing the datashare with the AWS Data Exchange service and allowing AWS Data Exchange to manage access to the datashare on your behalf. AWS Data Exchange allows access to consumers by adding consumer accounts as data consumers to the AWS Data Exchange datashare when they subscribe to the products. AWS Data Exchange doesn't have read access to the datashare.
   + Choose one or more consumer clusters that you want to remove authorization from. Then choose **Remove authorization**.

After data consumers are authorized, they can access datashare objects and create a consumer database to query the data. 

After authorization is removed, data consumers lose access to the datashare immediately.

# Managing datashares from other accounts as a consumer


## Removing association of datashare from data consumers


As a consumer administrator, you can remove association of datashares from data consumers.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Datashares**. The datashare list page appears.

1. Choose **From other accounts**.

1. In the **Datashares from other accounts** section, choose the datashare to remove association from data consumers.

1. In the **Data consumers** section, choose one or more data consumers to remove association from. Then choose **Remove association**.

1. When the Remove association page appears, choose **Remove association**.

After the association is removed, data consumers lose access to the datashare. You can change the data consumer association at any time.

## Declining datashares


As a consumer administrator, you can reject any datashare whose state is [available or active](https://docs.aws.amazon.com/redshift/latest/dg/access-cross-account.html#manage-status). After you reject a datashare, consumer cluster users lose access to the datashare. Amazon Redshift doesn't return the rejected datashare if you call the `DescribeDataSharesForConsumer` API operation. If the producer administrator runs the `DescribeDataSharesForProducer` API operation, they will see that the datashare was rejected. Once a datashare is rejected, the producer administrator can authorize the datashare to a consumer cluster again, and the consumer administrator can choose to associate their AWS account with the datashare or reject it. 

If your AWS account has an association to a datashare and a pending association to a datashare that's managed by Lake Formation, rejecting the datashare association that's managed by Lake Formation also rejects the original datashare. To reject a specific association, the producer administrator can remove authorization from a specified datashare. This action doesn't affect other datashares.

To reject a datashare, use the AWS console, the API operation `RejectDataShare`, or `reject-datashare` in the AWS CLI.

**To reject a datashare using the AWS console:**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. In the navigation menu, choose **Datashares**.

1. Choose **From other accounts**.

1. In the **Datashares from other accounts** section, choose the datashare you want to decline. When the **Decline datashare** page appears, choose **Decline**.

After you decline the datashares, you can't revert the change. Amazon Redshift removes the datashares from the list. To see the datashare again, the producer administrator must authorize it again.

# Managing existing datashares


With Amazon Redshift, you can manage existing datashares to control access to your data in an Amazon Redshift cluster. The following sections provide step-by-step guidance on managing datashares in your Amazon Redshift environment.

## Viewing datashares


View datashares from the **DATASHARES** or **CLUSTERS** tab.
+ Use the **DATASHARES** tab to list datashares in your account or from other accounts.
  + To view datashares created in your account, choose **In my account**, then choose the datashare you want to view.
  + To view datashares that are shared from other accounts, choose **From other accounts**, then choose the datashare you want to view.
+ Use the **CLUSTERS** tab to list datashares in your cluster or from other clusters.

  Connect to a database. For more information, see [Connecting to a database](connect-database-console.md).

  Then choose a datashare either from the **Datashares from other clusters** or **Datashares created in my cluster** section to view its details.

## Removing datashare objects from datashares


You can remove one or more objects from a datashare by using the following procedure.

**To remove one or more objects from a datashare**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**.

1. In the **Datashares created in my account** section, choose **Connect to database**. For more information, see [Connecting to a database](connect-database-console.md).

1. Choose the datashare you want to edit, then choose **Edit**. The datashare details page appears.

1. To remove one or more datashare objects from the datashare, do one of the following:
   + To remove schemas from the datashare, choose one or more schemas. Then choose **Remove**. Amazon Redshift removes the specified schemas and all the objects of the specified schemas from the datashare.
   + To remove tables and views from the datashare, choose one or more tables and views. Then choose **Remove**. Alternatively, choose **Remove by schema** to remove all tables and views in the specified schemas.
   + To remove user-defined functions from the datashare, choose one or more user-defined functions. Then choose **Remove**. Alternatively, choose **Remove by schema** to remove all user-defined functions in the specified schemas.

## Removing data consumers from datashares


You can remove one or more data consumers from a datashare. Data consumers can be namespaces, which uniquely identify Amazon Redshift clusters, or AWS accounts.

Choose one or more data consumers, either by namespace ID or AWS account, then choose **Remove**.

Amazon Redshift removes the specified data consumers from the datashare. They lose access to the datashare immediately.

## Editing datashares created in your account


Edit datashares created in your account using the console. Connect to a database first to see the list of datashares created in your account.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**.

1. In the **Datashares created in my account** section, choose **Connect to database**. For more information, see [Connecting to a database](connect-database-console.md).

1. Choose the datashare you want to edit, then choose **Edit**. The datashare details page appears.

1. Make any changes in the **Datashare objects** or **Data consumers** section.
**Note**  
If you chose to publish your datashare to the AWS Glue Data Catalog, you can't edit the configuration to publish the datashare to other Amazon Redshift accounts.

1. Choose **Save changes**.

Amazon Redshift updates your datashare with the changes.

## Deleting a datashare created in your account


Delete datashares created in your account using the console. Connect to a database first to see the list of datashares created in your account.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**. The datashare list appears.

1. In the **Datashares created in my account** section, choose **Connect to database**. For more information, see [Connecting to a database](connect-database-console.md).

1. Choose one or more datashares you want to delete, then choose **Delete**. The Delete datashares page appears.

   Deleting a datashare shared with Lake Formation doesn't automatically remove the associated permissions in Lake Formation. To remove them, go to the Lake Formation console.

1. Type **Delete** to confirm deleting the specified datashares.

1. Choose **Delete**.

After datashares are deleted, datashare consumers lose access to the datashares.

# Querying datashares


With Amazon Redshift, you can query data across datashares from producer clusters to securely access live data without copying or transferring it. The following sections cover the details of setting up and querying datashares in your Amazon Redshift environment.

## Creating databases from datashares


To start querying data in the datashare, create a database from a datashare. You can create only one database from a specified datashare.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**. The datashare list appears.

1. In the **Datashares from other clusters** section, choose **Connect to database**. For more information, see [Connecting to a database](connect-database-console.md).

1. Choose a datashare that you want to create databases from, then choose **Create database from datashare**. The Create database from datashare page appears.

1. In the **Database name**, specify a database name. The database name must be 1–64 alphanumeric characters (lowercase only) and it can't be a reserved word.

1. Choose **Create**.

After the database is created, you can query data in the database.
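You can also create the database with SQL on the consumer. The following sketch assumes a hypothetical database name `sales_db` and a placeholder producer namespace GUID:

```
CREATE DATABASE sales_db FROM DATASHARE salesshare OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
```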

# Managing AWS Data Exchange datashares


With Amazon Redshift, you can securely share and receive live data from AWS Data Exchange without having to create and manage data extracts or pipelines. Managing AWS Data Exchange datashares allows you to subscribe to third-party data products and integrate live data streams directly into your Amazon Redshift data warehouse. The following sections demonstrate managing AWS Data Exchange datashares within your Amazon Redshift clusters.

## Creating data sets on AWS Data Exchange


Create data sets on AWS Data Exchange.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**.

1. In the **Datashares created in my account** section, choose an AWS Data Exchange datashare.

1. Choose **Create data set on AWS Data Exchange**. For more information, see [Publishing a new product](https://docs.aws.amazon.com/data-exchange/latest/userguide/publishing-products.html).

## Editing AWS Data Exchange datashares


Edit AWS Data Exchange datashares using the console. Connect to a database first to see the list of datashares created in your account.

For AWS Data Exchange datashares, you can't make changes to data consumers.

To edit the publicly accessible setting for AWS Data Exchange datashares, use the Query editor v2. Amazon Redshift generates a random one-time value to set the session variable to allow turning this setting off. For more information, see [ALTER DATASHARE usage notes](r_ALTER_DATASHARE.md#r_ALTER_DATASHARE_usage).

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. From the navigator menu, choose **Editor**, then **Query editor v2**.

1. If this is the first time you use the Query editor v2, configure your AWS account. By default, an AWS owned key is used to encrypt resources. For more information about configuring your AWS account, see [Configuring your AWS account](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-getting-started.html) in the *Amazon Redshift Management Guide*.

1. To connect to the cluster that your AWS Data Exchange datashare is in, choose **Database** and the cluster name in the tree-view panel. If prompted, enter the connection parameters.

1. Copy the following SQL statement. The following example changes the publicly accessible setting of the salesshare datashare.

   ```
   ALTER DATASHARE salesshare SET PUBLICACCESSIBLE FALSE;
   ```

1. To run the copied SQL statement, choose **Queries** and paste the copied SQL statement in the query area. Then choose **Run**.

   An error similar to the following appears:

   ```
   ALTER DATASHARE salesshare SET PUBLICACCESSIBLE FALSE;
   ERROR:  Alter of ADX-managed datashare salesshare requires session variable datashare_break_glass_session_var to be set to value 'c670ba4db22f4b'
   ```

   The value 'c670ba4db22f4b' is a random one-time value that Amazon Redshift generates when an unrecommended operation occurs.

1. Copy and paste the following sample statement into the query area. Then run the command. The `SET datashare_break_glass_session_var` command applies a permission to allow an unrecommended operation for an AWS Data Exchange datashare.

   ```
   SET datashare_break_glass_session_var to 'c670ba4db22f4b';
   ```

1. Run the ALTER DATASHARE statement again.

   ```
   ALTER DATASHARE salesshare SET PUBLICACCESSIBLE FALSE;
   ```

Amazon Redshift updates your datashare with the changes.

## Deleting AWS Data Exchange datashares created in your account


Delete AWS Data Exchange datashares created in your account using the console. Connect to a database first to see the list of datashares created in your account.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. From the navigator menu, choose **Editor**, then **Query editor v2**.

1. If this is the first time you use the Query editor v2, configure your AWS account. By default, an AWS owned key is used to encrypt resources. For more information about configuring your AWS account, see [Configuring your AWS account](https://docs.aws.amazon.com/redshift/latest/mgmt/query-editor-v2-getting-started.html) in the *Amazon Redshift Management Guide*.

1. To connect to the cluster that your AWS Data Exchange datashare is in, choose **Database** and the cluster name in the tree-view panel. If prompted, enter the connection parameters.

1. Copy the following SQL statement. The following example drops the salesshare datashare.

   ```
   DROP DATASHARE salesshare;
   ```

1. To run the copied SQL statement, choose **Queries** and paste the copied SQL statement in the query area. Then choose **Run**.

   An error similar to the following appears:

   ```
   ERROR:  Drop of ADX-managed datashare salesshare requires session variable datashare_break_glass_session_var to be set to value '620c871f890c49'
   ```

   The value '620c871f890c49' is a random one-time value that Amazon Redshift generates when an unrecommended operation occurs.

1. Copy and paste the following sample statement into the query area. Then run the command. The `SET datashare_break_glass_session_var` command applies a permission to allow an unrecommended operation for an AWS Data Exchange datashare.

   ```
   SET datashare_break_glass_session_var to '620c871f890c49';
   ```

1. Run the DROP DATASHARE statement again.

   ```
   DROP DATASHARE salesshare;
   ```

After the datashare is deleted, datashare consumers lose access to the datashare. 

Deleting a shared AWS Data Exchange datashare can breach data product terms in AWS Data Exchange.

# Getting started with read-only data sharing with the SQL interface


With Amazon Redshift, you can securely share data across Amazon Redshift clusters, enabling data consumers to query and access live data without copying or replicating it. Data sharing lets you create and configure datashares, which are producer-side objects that reference the database objects you want to share.

You can share data for read purposes across different Amazon Redshift clusters within or across AWS accounts, or across AWS Regions.

**Topics**
+ [Sharing read access to data within an AWS account](within-account.md)
+ [Working with views in Amazon Redshift data sharing](datashare-views.md)
+ [Adding data lake tables to a datashare](create-datashare-external-views.md)
+ [Sharing data across AWS accounts](across-account.md)
+ [Sharing data across AWS Regions](across-region.md)
+ [Sharing licensed Amazon Redshift data on AWS Data Exchange](adx-getting-started.md)
+ [Getting started with AWS Lake Formation-managed datashares](lf-getting-started.md)

# Sharing read access to data within an AWS account


With Amazon Redshift, you can share read access to data across different database users or groups within the same AWS account. This feature allows you to control data access privileges at a granular level, ensuring that only authorized users or groups can read specific data sets. 

## Share data for read purposes as a producer administrator or database owner


1. Create datashares in your cluster. For more information, see [CREATE DATASHARE](r_CREATE_DATASHARE.md).

   ```
   CREATE DATASHARE salesshare;
   ```

   Cluster superusers and database owners can create datashares. Each datashare is associated with a database during creation. Only objects from that database can be shared in that datashare. Multiple datashares can be created on the same database with the same or different granularity of objects. There is no limit on the number of datashares a cluster can create.
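   For example, two datashares with different granularity can exist on the same database; the datashare names here are illustrative:

   ```
   -- One datashare that shares every table in the schema
   CREATE DATASHARE fullshare;
   ALTER DATASHARE fullshare ADD SCHEMA public;
   ALTER DATASHARE fullshare ADD ALL TABLES IN SCHEMA public;

   -- Another datashare on the same database that shares a single table
   CREATE DATASHARE partialshare;
   ALTER DATASHARE partialshare ADD SCHEMA public;
   ALTER DATASHARE partialshare ADD TABLE public.tickit_sales_redshift;
   ```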

   You can also use the Amazon Redshift console to create datashares. For more information, see [Create a datashare](datashare-creation.md#create-datashare-console).

1. Delegate permissions to operate on the datashare. For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

   The following example grants permissions to `dbuser` on `salesshare`.

   ```
   GRANT ALTER, SHARE ON DATASHARE salesshare TO dbuser;
   ```

   Cluster superusers and the owners of the datashare can grant or revoke modification permissions on the datashare to additional users.

1. Add objects to or remove objects from datashares. To add objects to a datashare, add the schema before adding objects. When you add a schema, Amazon Redshift doesn't add all the objects under it. Make sure to add these explicitly. For more information, see [ALTER DATASHARE](r_ALTER_DATASHARE.md).

   ```
   ALTER DATASHARE salesshare ADD SCHEMA PUBLIC;
   ALTER DATASHARE salesshare ADD TABLE public.tickit_sales_redshift;
   ALTER DATASHARE salesshare ADD ALL TABLES IN SCHEMA PUBLIC;
   ```

   You can also add views to a datashare.

   ```
   CREATE VIEW public.sales_data_summary_view AS SELECT * FROM public.tickit_sales_redshift;
   ALTER DATASHARE salesshare ADD VIEW public.sales_data_summary_view;
   ```

   Use ALTER DATASHARE to share schemas, as well as tables, views, and functions in a given schema. Superusers, datashare owners, and users with ALTER or ALL permission on the datashare can add objects to or remove objects from it. These users must also own the objects being added or removed, or have SELECT, USAGE, or ALL permissions on them.

   You can also use GRANT to add objects to the datashare. This example shows how:

   ```
   GRANT SELECT ON TABLE public.tickit_sales_redshift TO DATASHARE salesshare;
   ```

   This syntax is functionally equivalent to `ALTER DATASHARE salesshare ADD TABLE public.tickit_sales_redshift;`.

   Use the INCLUDENEW clause to add any new tables, views, or SQL user-defined functions (UDFs) created in a specified schema to the datashare. Only superusers can change this property for each datashare-schema pair.

   ```
   ALTER DATASHARE salesshare ADD SCHEMA PUBLIC;
   ALTER DATASHARE salesshare SET INCLUDENEW = TRUE FOR SCHEMA PUBLIC;
   ```

   You can also use the Amazon Redshift console to add or remove objects from datashares. For more information, see [Add datashare objects to datashares](datashare-creation.md#add-datashare-object-console), [Removing datashare objects from datashares](manage-datashare-existing-console.md#remove-datashare-object-console), and [Editing datashares created in your account](manage-datashare-existing-console.md#edit-datashare-console).

1. Add consumers to or remove consumers from datashares. The following example adds the consumer namespace to `salesshare`. The namespace is the namespace globally unique identifier (GUID) of the consumer cluster in the account. For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

   ```
   GRANT USAGE ON DATASHARE salesshare TO NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   You can only grant permissions to one datashare consumer in a GRANT statement.

   Cluster superusers and the owners of datashare objects or users that have SHARE permission on the datashare can add consumers to or remove consumers from a datashare. To do so, they use GRANT USAGE or REVOKE USAGE.

   To find the namespace of the cluster that you're currently connected to, use the SELECT CURRENT_NAMESPACE command. To find the namespace of a different cluster within the same AWS account, go to the Amazon Redshift console cluster details page and find the namespace field.
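   For example, run the following on the cluster you're connected to; the GUID returned identifies that cluster's namespace:

   ```
   SELECT current_namespace;
   ```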

   You can also use the Amazon Redshift console to add or remove data consumers for datashares. For more information, see [Add data consumers to datashares](datashare-creation.md#add-data-consumer-console) and [Removing data consumers from datashares](manage-datashare-existing-console.md#remove-data-consumer-console).

1. (Optional) Add security restrictions to the datashare. The following example specifies that publicly accessible consumer clusters are allowed to read the datashare. For more information, see [ALTER DATASHARE](r_ALTER_DATASHARE.md).

   ```
   ALTER DATASHARE salesshare SET PUBLICACCESSIBLE = TRUE;
   ```

   You can modify these consumer security properties after datashare creation. For example, you can specify that clusters consuming data from a given datashare can't be publicly accessible. Queries from consumer clusters that don't meet the security restrictions specified in the datashare are rejected at query runtime.

   You can also use the Amazon Redshift console to edit datashares. For more information, see [Editing datashares created in your account](manage-datashare-existing-console.md#edit-datashare-console).

1. List datashares created in the cluster and look into the contents of the datashare.

   The following example displays the information of a datashare named `salesshare`. 

   ```
   DESC DATASHARE salesshare;
                  
    producer_account  |          producer_namespace          | share_type | share_name | object_type |           object_name          |   include_new
   -------------------+--------------------------------------+------------+------------+-------------+--------------------------------+-------------------
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_users_redshift   |   
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_venue_redshift   |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_category_redshift|
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_date_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_event_redshift   |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_listing_redshift |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_sales_redshift   |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | schema      | public                         |  t
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | view        | public.sales_data_summary_view |
   ```

   The following example displays the outbound datashares in a producer cluster.

   ```
   SHOW DATASHARES LIKE 'sales%';
   ```

   The output looks similar to the following.

   ```
   share_name | share_owner  | source_database | consumer_database | share_type |     createdate      | is_publicaccessible  | share_acl | producer_account |          producer_namespace 
   -----------+--------------+-----------------+-------------------+------------+---------------------+----------------------+-----------+------------------+---------------------------------------
   salesshare |    100       | dev             |                   |  OUTBOUND  | 2020-12-09 02:27:08 |          True        |           |   123456789012   | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d
   ```

   For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md). 

   You can also use [SVV_DATASHARES](r_SVV_DATASHARES.md), [SVV_DATASHARE_CONSUMERS](r_SVV_DATASHARE_CONSUMERS.md), and [SVV_DATASHARE_OBJECTS](r_SVV_DATASHARE_OBJECTS.md) to view the datashares, the objects within the datashare, and the datashare consumers.

1. Drop datashares. For more information, see [DROP DATASHARE](r_DROP_DATASHARE.md).

   You can drop a datashare at any point using [DROP DATASHARE](r_DROP_DATASHARE.md). Cluster superusers and datashare owners can drop datashares.

   The following example drops a datashare named `salesshare`.

   ```
   DROP DATASHARE salesshare;
   ```

   You can also use the Amazon Redshift console to delete datashares. For more information, see [Deleting a datashare created in your account](manage-datashare-existing-console.md#delete-datashare-console).

1. Use ALTER DATASHARE to remove objects from a datashare at any point. Use REVOKE USAGE ON to revoke a consumer's permissions on the datashare. Revoking USAGE on a datashare instantly stops access to shared objects for all affected consumer clusters. After access is revoked, listing datashares and metadata queries, such as listing databases and tables, no longer return the shared objects.

   ```
   ALTER DATASHARE salesshare REMOVE TABLE public.tickit_sales_redshift;
   ```

   You can also use the Amazon Redshift console to edit datashares. For more information, see [Editing datashares created in your account](manage-datashare-existing-console.md#edit-datashare-console).

1. Revoke access to the datashare from namespaces if you don't want to share the data with the consumers anymore.

   ```
   REVOKE USAGE ON DATASHARE salesshare FROM NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   You can also use the Amazon Redshift console to edit datashares. For more information, see [Editing datashares created in your account](manage-datashare-existing-console.md#edit-datashare-console).

## Share data for read purposes as a consumer administrator


1. List the datashares that are made available to you and view the content of datashares. For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md).

   The following example displays the information of inbound datashares of a specified producer namespace. When you run DESC DATASHARE as a consumer administrator, you must specify the NAMESPACE option to view inbound datashares. 

   ```
   DESC DATASHARE salesshare OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   
   
    producer_account  |          producer_namespace          | share_type | share_name | object_type |           object_name           |   include_new
   -------------------+--------------------------------------+------------+------------+-------------+---------------------------------+------------------
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_users_redshift    |     
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_venue_redshift    |     
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_category_redshift |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_date_redshift     |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_event_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_listing_redshift  |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_sales_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | schema      | public                          |    
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | view        | public.sales_data_summary_view  |
   ```

   Only cluster superusers can do this. You can also use SVV_DATASHARES to view the datashares and SVV_DATASHARE_OBJECTS to view the objects within the datashare.

   The following example displays the inbound datashares in a consumer cluster.

   ```
   SHOW DATASHARES LIKE 'sales%';
   
   
    share_name | share_owner | source_database | consumer_database | share_type | createdate | is_publicaccessible | share_acl | producer_account |          producer_namespace
   ------------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+--------------------------------------
    salesshare |             |                 |                   | INBOUND    |            |         t           |           |   123456789012   | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d
   ```

1. As a database superuser, you can create local databases that reference the datashares. For more information, see [CREATE DATABASE](r_CREATE_DATABASE.md).

   ```
   CREATE DATABASE sales_db FROM DATASHARE salesshare OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   If you want more granular control over access to the objects in the local database, use the WITH PERMISSIONS clause when creating the database. This lets you grant object-level permissions for objects in the database in step 4. 

   ```
   CREATE DATABASE sales_db WITH PERMISSIONS FROM DATASHARE salesshare OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   You can see databases that you created from the datashare by querying the [SVV\$1REDSHIFT\$1DATABASES](r_SVV_REDSHIFT_DATABASES.md) view. You can connect to these databases directly, or you can connect to a local database on your consumer cluster and perform a cross-database query to query the data from the datashare databases. You can't create a datashare on top of database objects created from an existing datashare. However, you can copy the data into a separate table on the consumer cluster, perform any processing needed, and then share the new objects that were created.
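   For example, the following sketch copies shared data into a local table on the consumer so it can be processed and shared onward; the local table name is illustrative:

   ```
   -- Copy shared data into a local table on the consumer cluster
   CREATE TABLE public.local_sales AS
   SELECT * FROM sales_db.public.tickit_sales_redshift;

   -- The new local table can then be added to a consumer-side datashare
   ```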

   You can also use the Amazon Redshift console to create databases from datashares. For more information, see [Creating databases from datashares](query-datashare-console.md#create-database-from-datashare-console).

1. (Optional) Create external schemas to refer to and assign granular permissions to specific schemas in the consumer database imported on the consumer cluster. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md).

   ```
   CREATE EXTERNAL SCHEMA sales_schema FROM REDSHIFT DATABASE 'sales_db' SCHEMA 'public';
   ```

1. Grant permissions on databases and schema references created from the datashares to users and roles in the consumer cluster as needed. For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

   ```
   GRANT USAGE ON DATABASE sales_db TO Bob;
   ```

   ```
   GRANT USAGE ON SCHEMA sales_schema TO ROLE Analyst_role;
   ```

   If you created your database without WITH PERMISSIONS, you can only assign permissions on the entire database created from the datashare to your users and roles. In some cases, you need fine-grained controls on a subset of database objects created from the datashare. If so, you can create an external schema reference that points to specific schemas in the datashare (as described in the previous step) and grant granular permissions at the schema level. 

   You can also create late-binding views on top of shared objects and use these to assign granular permissions. You can also consider having producer clusters create additional datashares for you with the granularity required. 
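   For example, a late-binding view over a shared table can expose only selected columns, and permissions can then be granted on the view; the view name and column subset here are illustrative:

   ```
   -- Late-binding view that exposes a subset of columns from a shared table
   CREATE VIEW sales_summary AS
   SELECT salesid, qtysold, pricepaid
   FROM sales_db.public.tickit_sales_redshift
   WITH NO SCHEMA BINDING;

   -- Grant access to the view rather than the whole shared database
   GRANT SELECT ON sales_summary TO Bob;
   ```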

   If you created your database with WITH PERMISSIONS in step 2, you must grant object-level permissions on objects in the shared database. A user with only the USAGE permission can't access any objects in a database created with WITH PERMISSIONS until they're granted additional object-level permissions.

   ```
   GRANT SELECT ON sales_db.public.tickit_sales_redshift to Bob;
   ```

1. Query data in the shared objects in the datashares.

   Users and roles with permissions on consumer databases and schemas on consumer clusters can explore and navigate the metadata of any shared objects. They can also explore and navigate local objects in a consumer cluster. To do this, they use JDBC or ODBC drivers or the SVV_ALL and SVV_REDSHIFT views.

   Producer clusters might have many schemas in the database, tables, and views within each schema. The users on the consumer side can see only the subset of objects that are made available through the datashare. These users can't see the entire metadata from the producer cluster. This approach helps provide granular metadata security control with data sharing.

   You continue to connect to local cluster databases. But now, you can also read from the databases and schemas that are created from the datashare using the three-part database.schema.table notation. You can run queries that span any and all databases that are visible to you, whether they are local databases on the cluster or databases created from the datashares.

   You can access the data using full qualification. For more information, see [Cross-database query examples](cross-database_example.md).

   ```
   SELECT * FROM sales_db.public.tickit_sales_redshift ORDER BY 1,2 LIMIT 5;
   
    salesid | listid | sellerid | buyerid | eventid | dateid | qtysold | pricepaid | commission |      saletime
   ---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------
          1 |      1 |    36861 |   21191 |    7872 |   1875 |       4 |    728.00 |     109.20 | 2008-02-18 02:36:48
          2 |      4 |     8117 |   11498 |    4337 |   1983 |       2 |     76.00 |      11.40 | 2008-06-06 05:00:16
          3 |      5 |     1616 |   17433 |    8647 |   1983 |       2 |    350.00 |      52.50 | 2008-06-06 08:26:17
          4 |      5 |     1616 |   19715 |    8647 |   1986 |       1 |    175.00 |      26.25 | 2008-06-09 08:38:52
          5 |      6 |    47402 |   14115 |    8240 |   2069 |       2 |    154.00 |      23.10 | 2008-08-31 09:17:02
   ```

   You can only use SELECT statements on shared objects. However, you can create tables in the consumer cluster by querying the data from the shared objects in a different local database.

   In addition to queries, consumers can create views on shared objects. Only late-binding views or materialized views are supported. Amazon Redshift doesn't support regular views on shared data. Views that consumers create can span across multiple local databases or databases created from datashares. For more information, see [CREATE VIEW](r_CREATE_VIEW.md).

   ```
   -- Connect to a local cluster database

   -- Create a view on shared objects and access it.
   CREATE VIEW sales_data 
   AS SELECT * 
   FROM sales_db.public.tickit_sales_redshift 
   WITH NO SCHEMA BINDING;
   
   SELECT * FROM sales_data;
   ```

# Working with views in Amazon Redshift data sharing

A producer cluster can share regular views, late-binding views, and materialized views. When sharing these views, you don't have to share the base tables. The following table shows how views are supported with data sharing.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/datashare-views.html)

The following query shows the output of a regular view that is supported with data sharing. For information about regular view definition, see [CREATE VIEW](r_CREATE_VIEW.md).

```
SELECT * FROM tickit_db.public.myevent_regular_vw 
ORDER BY eventid LIMIT 5;

   eventid  |  eventname
  ----------+-------------
     3835   | LeAnn Rimes
     3967   | LeAnn Rimes
     4856   | LeAnn Rimes
     4948   | LeAnn Rimes     
     5131   | LeAnn Rimes
```

The following query shows the output of a late-binding view that is supported with data sharing. For information about late-binding view definition, see [CREATE VIEW](r_CREATE_VIEW.md).

```
SELECT * FROM tickit_db.public.event_lbv 
ORDER BY eventid LIMIT 5;
         
 eventid | venueid | catid | dateid |          eventname           |      starttime
 --------+---------+-------+--------+------------------------------+---------------------
     1   |   305   |   8   |  1851  |        Gotterdammerung       | 2008-01-25 14:30:00
     2   |   306   |   8   |  2114  |           Boris Godunov      | 2008-10-15 20:00:00
     3   |   302   |   8   |  1935  |              Salome          | 2008-04-19 14:30:00
     4   |   309   |   8   |  2090  |  La Cenerentola (Cinderella) | 2008-09-21 14:30:00
     5   |   302   |   8   |  1982  |          Il Trovatore        | 2008-06-05 19:00:00
```

The following query shows the output of a materialized view that is supported with data sharing. For information about materialized view definition, see [CREATE MATERIALIZED VIEW](materialized-view-create-sql-command.md).

```
SELECT * FROM tickit_db.public.tickets_mv;

  catgroup | qtysold
 ----------+---------
  Concerts | 195444
    Shows  | 149905
```

You can maintain common tables across all tenants in a producer cluster. You can also share subsets of data filtered by dimension columns, such as `tenant_id` (`account_id` or `namespace_id`), to consumer clusters. To do this, you can define a view on the base table with a filter on these ID columns, for example `current_aws_account = tenant_id`. On the consumer side, when you query the view, you see only the rows that qualify for your account. To do this, you can use the Amazon Redshift context functions `current_aws_account` and `current_namespace`.
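As a sketch of this pattern, the following producer-side view filters a multi-tenant base table by account; the base table `sales_all_tenants` and its `tenant_id` column are hypothetical:

```
-- Producer-side view that returns only the rows owned by the querying account
CREATE VIEW public.sales_by_tenant AS
SELECT *
FROM public.sales_all_tenants
WHERE tenant_id = current_aws_account;

-- Share the filtered view instead of the base table
ALTER DATASHARE salesshare ADD VIEW public.sales_by_tenant;
```

When a consumer queries the shared view, `current_aws_account` evaluates to the consumer's account, so only that tenant's rows are returned.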

The following query returns the account ID in which the current Amazon Redshift cluster resides. You can run this query if you are connected to Amazon Redshift.

```
select current_user, current_aws_account;

current_user | current_aws_account
-------------+--------------------
dwuser       |    111111111111
(1 row)
```

The following query returns the namespace of the current Amazon Redshift cluster. You can run this query if you are connected to the database.

```
select current_user, current_namespace; 

current_user | current_namespace
-------------+--------------------------------------
dwuser       | 86b5169f-01dc-4a6f-9fbb-e2e24359e9a8
(1 row)
```

## Incremental refresh for materialized views in a datashare


Amazon Redshift supports incremental refresh for materialized views in a consumer datashare when the base tables or materialized views are shared. Incremental refresh is an operation where Amazon Redshift identifies changes in the base table or tables that happened after the previous refresh and updates only the corresponding records in the materialized view. For more information about this behavior, see [CREATE MATERIALIZED VIEW](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-create-sql-command.html#mv_CREATE_MARTERIALIZED_VIEW_datashare). 

# Adding data lake tables to a datashare


With a datashare, a data *producer* can securely share fine-grained database objects, such as schemas and tables, with *consumers* in the same AWS account or in different accounts. The producer can also share objects across AWS Regions. This topic describes how to add objects from a data lake, specifically from the AWS Glue Data Catalog, to a datashare. It covers two use cases:
+ *Adding a late-binding view to a datashare that references a table from a data lake* – This is convenient for a consumer, because preliminary configuration, such as defining permissions on the external source data, for example with Lake Formation, is likely already completed. An additional benefit is that a view added to a datashare can join tables from the data lake with Redshift native tables.
+ *Adding a table from an external schema to a datashare directly* – This makes objects from the data lake available to consumers with no additional layers or logic. Consumers can query the table or join it with tables on the consumer.

These cases apply after you reference a table from the AWS Glue Data Catalog in Redshift using [CREATE EXTERNAL SCHEMA](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_SCHEMA.html). Any table from the AWS Glue Data Catalog can be the source.

**Note**  
Data lake tables that you add to a datashare can include tables registered with Lake Formation and AWS Glue Data Catalog tables.

## Create an external schema and an external table


You create an external schema and an external table in order to add them to the datashare in the sections that follow. These are preliminary steps. If you have already done this, you can skip this section.

1. On the producer, create an external schema that references the data lake data stored in Amazon S3. The external schema references the AWS Glue Data Catalog. The IAM role and Region in the sample are placeholders:

   ```
   CREATE EXTERNAL SCHEMA external_schema_name FROM DATA CATALOG
   DATABASE 'glue_database_name'
   IAM_ROLE 'arn:aws:iam::123456789012:role/sample-role'
   REGION 'us-east-1';
   ```

1. Create a data lake table in the external schema.

   ```
   CREATE EXTERNAL TABLE external_schema_name.sales(
   salesid INTEGER,
   sellerid INTEGER,
   buyerid INTEGER,
   saledate DATE,
   pricepaid DECIMAL(8,2))
   ROW FORMAT delimited
   FIELDS TERMINATED BY '\t'
   STORED AS textfile
   LOCATION 's3://redshift-downloads/tickit/spectrum/sales/';
   ```

   The sample includes the `LOCATION`, which must be in the form `s3://{bucket_name}/{folder}/`, with a folder name of at least one character. You can optionally include subfolders. To see additional examples for creating tables in a data lake, see [Examples](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_EXTERNAL_TABLE_examples.html) for CREATE EXTERNAL TABLE.
**Note**  
Sharing is supported only for tables where the IAM role on the producer has SELECT access on the table.

## Add a late-binding view that references a data lake table to a datashare


When you create tables based on an external schema from the AWS Glue Data Catalog and want to add them to a datashare, the most common approach is to add a Redshift late-binding view that references the table containing the data lake data. The following procedure shows the steps:

1. Create a late-binding view that references the external table you created previously:

   ```
   CREATE VIEW lbv AS 
   select * from external_schema_name.sales, other_schema.t1
   WITH NO SCHEMA BINDING;
   ```

1. Add the view schema to the datashare. This is the local schema that contains the late-binding view. 

   ```
   ALTER DATASHARE my_datashare ADD SCHEMA public;
   ```

1. Add the schema that contains the table referenced by the late-binding view to the datashare. Adding the schema is required for any base tables referenced in a view that's added to a datashare, whether the schema contains local database objects or objects from a data lake. Note that you must add this schema before you add the late-binding view.

   ```
   ALTER DATASHARE my_datashare ADD SCHEMA external_schema_name;
   ALTER DATASHARE my_datashare ADD SCHEMA other_schema;
   ```

1. Add the view to the datashare, using a SQL command. Note that the view name includes the schema prefix.

   ```
   ALTER DATASHARE my_datashare ADD TABLE public.lbv;
   ```

1. Confirm that the view and schemas are successfully added to the datashare:

   ```
   SELECT * FROM svv_datashare_objects WHERE share_name = 'my_datashare';
   ```

1. The consumer administrator creates a database from the datashare and then grants usage to consumer users.

After you complete the steps, database consumer users with access to the datashare view can query the data.
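For example, a consumer user might query the shared view through a database created from the datashare; the database name `shared_db` is illustrative:

```
SELECT * FROM shared_db.public.lbv LIMIT 10;
```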

## Add a data lake table directly to a datashare


Adding a table in an external schema to a datashare is similar to adding a view. This works well for a case where a consumer wants to query the data lake table in its original state, or if a consumer wants to join it to tables in the consumer data warehouse. The steps that follow show you how to add a data lake table to a datashare, using SQL.

1. Create an external schema and an external table, as described in the first section of this topic.

1. Discover existing tables in the external schema, to confirm that the table you created is available:

   ```
   SELECT * FROM svv_external_tables WHERE schemaname = 'external_schema_name';
   ```

1. Add the external schema to the datashare:

   ```
   ALTER DATASHARE my_datashare ADD SCHEMA external_schema_name;
   ```

1. Add the external table to the datashare. Note that the table name includes the schema prefix:

   ```
   ALTER DATASHARE my_datashare ADD TABLE external_schema_name.sales;
   ```

1. Confirm that the table is successfully added to the datashare:

   ```
   SELECT * FROM svv_datashare_objects WHERE share_name = 'my_datashare';
   ```

   For more detailed instructions, see [Sharing read access to data within an AWS account](https://docs.aws.amazon.com/redshift/latest/dg/within-account.html).

1. On the consumer, which is the database receiving the shared data, the administrator associates the datashare to make shared tables available for users to query. For more information regarding how to perform this step, see [Managing datashares from other accounts as a consumer](manage-datashare-other-console.html).

After administrators complete the steps, database users on the consumer can write queries to retrieve data from the shared table and join it with other tables on the consumer.
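For example, a consumer query might join the shared data lake table with a local table; the database, table, and column names other than those of the `sales` table defined earlier are illustrative:

```
-- Join the shared data lake table with a local consumer table
SELECT s.salesid, s.pricepaid, c.customer_name
FROM shared_db.external_schema_name.sales AS s
JOIN public.customers AS c ON c.customer_id = s.buyerid;
```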

## Usage notes for adding data lake objects to a datashare


There are several items to note when you use tables and views from a data lake in a datashare:
+ **Logging with AWS CloudTrail** – The data producer account can use AWS CloudTrail logs to audit when data lake tables shared through a datashare are accessed:
  + **Using log data to control data access** – The CloudTrail logs record details about who accesses shared tables, including both Redshift datashare producers and consumers. The identifiers are available in the `ExternalId` field under the `AssumeRole` CloudTrail logs. The data owner can configure additional limitations on data access in an IAM policy by means of actions. For more information about defining data access through policies, see [Access to AWS accounts owned by third parties](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html).
+ **Security and consumer permissions** – For Lake Formation registered tables, Amazon S3 resources are secured by Lake Formation and made available using the credentials provided by Lake Formation.

## Billing considerations for adding data lake objects to a datashare


The following details how costs are attributed for storing and scanning data lake objects in a datashare:
+ When a consumer queries shared objects from a data lake, the cost of scanning is billed to the consumer.
  + When the consumer is a provisioned cluster, Redshift uses Redshift Spectrum to scan Amazon S3 data. Therefore, the Spectrum cost is billed to the consumer account.
  + When the consumer is an Amazon Redshift Serverless workgroup, there is no separate charge for Spectrum.
+ Amazon S3 costs for storage and operations, such as listing buckets, are billed to the account that owns each Amazon S3 bucket.

For additional details regarding billing for Amazon Redshift Serverless, see [Billing for Amazon Redshift Serverless](https://docs.aws.amazon.com/redshift/latest/mgmt/serverless-billing.html). More billing and pricing information is available at [Amazon Redshift pricing](https://aws.amazon.com/redshift/pricing/).

# Sharing data across AWS accounts


You can share data for read purposes across AWS accounts. Sharing data across AWS accounts works similarly to sharing data within an account. The difference is that a two-way handshake is required when sharing data across AWS accounts. A producer account administrator can either authorize consumer accounts to access datashares or choose not to authorize any access. To use an authorized datashare, a consumer account administrator can associate the datashare. The administrator can associate the datashare with an entire AWS account or with specific clusters in the consumer account, or decline the datashare. For more information about sharing data within an account, see [Sharing read access to data within an AWS account](within-account.md).

A datashare can have data consumers that are either namespaces in the same account or different AWS accounts. You don't need to create separate datashares for sharing within an account and cross-account sharing.

For cross-account data sharing, both the producer and consumer cluster must be encrypted.

When sharing data with AWS accounts, producer administrators share with the AWS account as an entity. A consumer administrator can decide which namespaces in the consumer account get access to a datashare.

**Topics**
+ [Producer administrator actions](producer-cluster-admin.md)
+ [Consumer account administrator actions](consumer-account-admin.md)
+ [Consumer administrator actions](consumer-cluster-admin.md)

# Producer administrator actions


With Amazon Redshift, you can perform administrative tasks on producer clusters to manage data ingestion and load processing.

**If you are a producer administrator or database owner** – follow these steps:

1. Create datashares in your cluster and add datashare objects to the datashares. For more detailed steps on how to create datashares and add datashare objects to datashares, see [Sharing read access to data within an AWS account](within-account.md). For information about the CREATE DATASHARE and ALTER DATASHARE commands, see [CREATE DATASHARE](r_CREATE_DATASHARE.md) and [ALTER DATASHARE](r_ALTER_DATASHARE.md).

   The following example adds different datashare objects to the datashare `salesshare`.

   ```
   -- Add schema to datashare
   ALTER DATASHARE salesshare ADD SCHEMA PUBLIC;
   
   -- Add table under schema to datashare
   ALTER DATASHARE salesshare ADD TABLE public.tickit_sales_redshift;
   
   -- Add view to datashare 
   ALTER DATASHARE salesshare ADD TABLE public.sales_data_summary_view;
   
   -- Add all existing tables and views under schema to datashare (does not include future tables)
   ALTER DATASHARE salesshare ADD ALL TABLES in schema public;
   ```

   You can also use the Amazon Redshift console to create or edit datashares. For more information, see [Create a datashare](datashare-creation.md#create-datashare-console) and [Editing datashares created in your account](manage-datashare-existing-console.md#edit-datashare-console).

1. Delegate permissions to operate on the datashare. For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

   The following example grants permissions to `dbuser` on `salesshare`.

   ```
   GRANT ALTER, SHARE ON DATASHARE salesshare TO dbuser;
   ```

   Cluster superusers and the owners of the datashare can grant or revoke modification permissions on the datashare to additional users.

1. Add consumers to or remove consumers from datashares. The following example adds the AWS account ID to `salesshare`. For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

   ```
   GRANT USAGE ON DATASHARE salesshare TO ACCOUNT '123456789012';
   ```

   You can only grant permissions to one data consumer in a GRANT statement.

   Cluster superusers and the owners of datashare objects, or users that have SHARE permissions on the datashare, can add consumers to or remove consumers from a datashare. To do so, they use GRANT USAGE or REVOKE USAGE.

   You can also use the Amazon Redshift console to add or remove data consumers for datashares. For more information, see [Add data consumers to datashares](datashare-creation.md#add-data-consumer-console) and [Removing data consumers from datashares](manage-datashare-existing-console.md#remove-data-consumer-console).

1. (Optional) Revoke access to the datashare from AWS accounts if you don't want to share the data with the consumers anymore.

   ```
   REVOKE USAGE ON DATASHARE salesshare FROM ACCOUNT '123456789012';
   ```

**If you are a producer account administrator** – follow these steps:

After granting usage to the AWS account, the datashare status is `pending_authorization`. The producer account administrator should authorize datashares using the Amazon Redshift console and choose the data consumers.

Sign in to the [Amazon Redshift console](https://console.aws.amazon.com/redshiftv2/). Then choose which data consumers to authorize to access datashares or to remove authorization from. Authorized data consumers receive notifications to take actions on datashares. If you are adding a namespace as a data consumer, you don't have to perform authorization. After data consumers are authorized, they can access datashare objects and create a consumer database to query the data. For more information, see [Authorizing or removing authorization from datashares](authorize-datashare-console.md).
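If you prefer scripting, the authorization can also be performed with the AWS CLI `authorize-data-share` command. The following is a sketch; the ARN components and the consumer account ID are placeholders:

```
aws redshift authorize-data-share
--data-share-arn arn:aws:redshift:{PRODUCER_REGION}:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/salesshare
--consumer-identifier 123456789012
```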

## Sharing write permissions to data across accounts


With Amazon Redshift, you can share data across AWS accounts and grant write permissions, enabling collaboration and data sharing between teams or organizations. Cross-account data sharing allows you to establish a data provider account that creates and manages databases, schemas, and tables, which can then be securely shared with data consumer accounts. The following sections demonstrate the process of configuring cross-account data sharing and granting write access in Amazon Redshift.

# Consumer account administrator actions


With Amazon Redshift, you can manage consumer accounts and control their access to your data warehousing resources.

**If you are a consumer account administrator** – follow these steps:

To associate one or more datashares that are shared from other accounts with your entire AWS account or specific namespaces in your account, use the Amazon Redshift console.

Sign in to the [Amazon Redshift console](https://console.aws.amazon.com/redshiftv2/). Then, associate one or more datashares that are shared from other accounts with your entire AWS account or specific namespaces in your account. For more information, see [Associating a datashare from a different AWS account in Amazon Redshift](writes-associating.md).
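The same association can be scripted with the AWS CLI `associate-data-share-consumer` command. The following is a sketch; the ARN components are placeholders:

```
aws redshift associate-data-share-consumer
--data-share-arn arn:aws:redshift:{PRODUCER_REGION}:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/salesshare
--associate-entire-account
```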

After the AWS account or specific namespaces are associated, the datashares become available for consumption. You can also change datashare association at any time. When changing association from individual namespaces to an AWS account, Amazon Redshift overwrites the namespaces with the AWS account information, and all namespaces in the account get access to the data. When changing association from an AWS account to specific namespaces, Amazon Redshift overwrites the AWS account information with the namespace information.

# Consumer administrator actions


With Amazon Redshift, you can perform administrative tasks on consumer clusters to manage data ingestion and load processing.

**If you are a consumer administrator** – follow these steps:

1. List the datashares made available to you and view the content of datashares. The content of datashares is available only when the producer administrator has authorized the datashares and the consumer administrator has accepted and associated the datashares. For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md).

   The following example displays information about the inbound datashares from a specified producer namespace. When you run DESC DATASHARE as a consumer administrator, you must specify the account ID and NAMESPACE option to view inbound datashares. For outbound datashares, specify only the datashare name.

   ```
   SHOW DATASHARES LIKE 'sales%';
   
   share_name | share_owner | source_database | consumer_database | share_type | createdate | is_publicaccessible | share_acl | producer_account |          producer_namespace
   -----------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+---------------------------------------
   salesshare |             |                 |                   | INBOUND    |            |        t            |           | 123456789012    | 'dd8772e1-d792-4fa4-996b-1870577efc0d'
   ```

   ```
   DESC DATASHARE salesshare OF ACCOUNT '123456789012' NAMESPACE 'dd8772e1-d792-4fa4-996b-1870577efc0d';
   
   
    producer_account |          producer_namespace          | share_type | share_name | object_type |           object_name
   ------------------+--------------------------------------+------------+------------+-------------+---------------------------------
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | table       | public.tickit_users_redshift
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | table       | public.tickit_venue_redshift
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | table       | public.tickit_category_redshift
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | table       | public.tickit_date_redshift
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | table       | public.tickit_event_redshift
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | table       | public.tickit_listing_redshift
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | table       | public.tickit_sales_redshift
    123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d | INBOUND    | salesshare | schema      | public
   (8 rows)
   ```

   Only cluster superusers can do this. You can also use SVV_DATASHARES to view the datashares and SVV_DATASHARE_OBJECTS to view the objects within the datashare.

   The following example displays the inbound datashares in a consumer cluster.

   ```
   SELECT * FROM SVV_DATASHARES WHERE share_name LIKE 'sales%';
   
   share_name | share_owner | source_database | consumer_database | share_type | createdate | is_publicaccessible | share_acl | producer_account |          producer_namespace
   -----------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+---------------------------------------
   salesshare |             |                 |                   | INBOUND    |            |        t            |           | 123456789012      | 'dd8772e1-d792-4fa4-996b-1870577efc0d'
   ```

   ```
   SELECT * FROM SVV_DATASHARE_OBJECTS WHERE share_name LIKE 'sales%';
    share_type | share_name | object_type |           object_name           | producer_account |          producer_namespace
   ------------+------------+-------------+---------------------------------+------------------+--------------------------------------
    INBOUND    | salesshare | table       | public.tickit_users_redshift    | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
    INBOUND    | salesshare | table       | public.tickit_venue_redshift    | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
    INBOUND    | salesshare | table       | public.tickit_category_redshift | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
    INBOUND    | salesshare | table       | public.tickit_date_redshift     | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
    INBOUND    | salesshare | table       | public.tickit_event_redshift    | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
    INBOUND    | salesshare | table       | public.tickit_listing_redshift  | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
    INBOUND    | salesshare | table       | public.tickit_sales_redshift    | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
    INBOUND    | salesshare | schema      | public                          | 123456789012     | dd8772e1-d792-4fa4-996b-1870577efc0d
   (8 rows)
   ```

1. Create local databases that reference the datashares. Specify the NAMESPACE and account ID when creating the database from the datashare. For more information, see [CREATE DATABASE](r_CREATE_DATABASE.md).

   ```
   CREATE DATABASE sales_db FROM DATASHARE salesshare OF ACCOUNT '123456789012' NAMESPACE 'dd8772e1-d792-4fa4-996b-1870577efc0d';
   ```

   If you want more granular control over access to the objects in the local database, use the WITH PERMISSIONS clause when creating the database. This lets you grant object-level permissions for objects in the database in step 4. 

   ```
   CREATE DATABASE sales_db WITH PERMISSIONS FROM DATASHARE salesshare OF ACCOUNT '123456789012' NAMESPACE 'dd8772e1-d792-4fa4-996b-1870577efc0d';
   ```

   You can see databases that you created from the datashare by querying the [SVV_REDSHIFT_DATABASES](r_SVV_REDSHIFT_DATABASES.md) view. You can connect to these databases directly, or you can connect to a local database on your consumer cluster and perform a cross-database query to query the data from the datashare databases. You can't create a datashare on top of database objects created from an existing datashare. However, you can copy the data into a separate table on the consumer cluster, perform any processing needed, and then share the new objects created.

1. (Optional) Create external schemas to refer to and assign granular permissions to specific schemas in the consumer database imported on the consumer cluster. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md).

   ```
   CREATE EXTERNAL SCHEMA sales_schema FROM REDSHIFT DATABASE 'sales_db' SCHEMA 'public';
   ```

1. Grant permissions on databases and schema references created from the datashares to users or roles in the consumer cluster as needed. For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

   ```
   GRANT USAGE ON DATABASE sales_db TO Bob;
   ```

   ```
   GRANT USAGE ON SCHEMA sales_schema TO ROLE Analyst_role;
   ```

   If you created your database without WITH PERMISSIONS, you can only assign permissions on the entire database created from the datashare to your users or roles. In some cases, you need fine-grained controls on a subset of database objects created from the datashare. If so, you can create an external schema reference pointing to specific schemas in the datashare, as described in the previous step. You can then provide granular permissions at the schema level. You can also create late-binding views on top of shared objects and use these to assign granular permissions. You can also consider having producer clusters create additional datashares for you with the granularity required. You can create as many schema references to the database created from the datashare as you need.

   If you created your database with WITH PERMISSIONS in step 2, you must assign object-level permissions for objects in the shared database. A user with only the USAGE permission can’t access any objects in a database created with WITH PERMISSIONS until they’re granted additional object-level permissions.

   ```
   GRANT SELECT ON sales_db.public.tickit_sales_redshift to Bob;
   ```

1. Query data in the shared objects in the datashares.

   Users and roles with permissions on consumer databases and schemas on consumer clusters can explore and navigate the metadata of any shared objects. They can also explore and navigate local objects in a consumer cluster. To do this, use JDBC or ODBC drivers or the SVV_ALL and SVV_REDSHIFT views.

   Producer clusters might have many schemas in the database, tables, and views within each schema. The users on the consumer side can see only the subset of objects that are made available through the datashare. These users can't see all the metadata from the producer cluster. This approach helps provide granular metadata security control with data sharing.
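   For example, a consumer user can list the shared tables that are visible to them by querying the SVV_ALL_TABLES view. The following is a sketch, assuming the `sales_db` database created from the datashare earlier:

   ```
   SELECT schema_name, table_name
   FROM svv_all_tables
   WHERE database_name = 'sales_db';
   ```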

   You continue to connect to local cluster databases. But now, you can also read from the databases and schemas that are created from the datashare by using three-part database.schema.table notation. You can perform queries that span any and all databases that are visible to you, whether they are local databases on the cluster or databases created from the datashares.

   You can access the data using full qualification. For more information, see [Cross-database query examples](cross-database_example.md).

   ```
   SELECT * FROM sales_db.public.tickit_sales_redshift;
   ```

   You can only use SELECT statements on shared objects. However, you can create tables in the consumer cluster by querying the data from the shared objects in a different local database.
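   For example, the following sketch materializes shared data into a local table on the consumer (the table name `local_sales_copy` is hypothetical):

   ```
   CREATE TABLE public.local_sales_copy AS
   SELECT * FROM sales_db.public.tickit_sales_redshift;
   ```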

   In addition to performing queries, consumers can create views on shared objects. Only late-binding views and materialized views are supported. Amazon Redshift doesn't support regular views on shared data. Views that consumers create can span across multiple local databases or databases created from datashares. For more information, see [CREATE VIEW](r_CREATE_VIEW.md).

   ```
   -- Connect to a local cluster database

   -- Create a view on shared objects and access it.
   CREATE VIEW sales_data 
   AS SELECT * 
   FROM sales_db.public.tickit_sales_redshift 
   WITH NO SCHEMA BINDING;
   
   SELECT * FROM sales_data;
   ```

# Sharing data across AWS Regions


You can share data for read purposes across Amazon Redshift clusters in different AWS Regions. With cross-Region data sharing, you can share data across AWS Regions without the need to copy data manually. You don't have to unload your data into Amazon S3 and copy the data into a new Amazon Redshift cluster or perform cross-Region snapshot copy.

With cross-Region data sharing, you can share data across clusters in the same AWS account, or in different AWS accounts even when the clusters are in different Regions. When sharing data with Amazon Redshift clusters that are in the same AWS account but different AWS Regions, follow the same workflow as sharing data within an AWS account. For more information, see [Sharing read access to data within an AWS account](within-account.md).

If clusters sharing data are in different AWS accounts and AWS Regions, you can follow the same workflow as sharing data across AWS accounts and include Region-level associations on the consumer cluster. Cross-Region data sharing supports datashare association with the entire AWS account, the entire AWS Region, or specific namespaces within an AWS Region. For more information about sharing data across AWS accounts, see [Sharing data across AWS accounts](across-account.md).

When consuming data from a different Region, the consumer pays the cross-Region data transfer fee from the producer Region to the consumer Region.

To use the datashare, a consumer account administrator can associate the datashare in one of the following three ways.
+ Association with an entire AWS account spanning all its AWS Regions
+ Association with a specific AWS Region in an AWS account
+ Association with specific namespaces within an AWS Region

When the administrator chooses the entire AWS account, all existing and future namespaces across different AWS Regions in the account have access to the datashares. A consumer account administrator can also choose specific AWS Regions or namespaces within a Region to grant them access to the datashares.

**If you are a producer administrator or database owner**, create a datashare, add database objects and data consumers to the datashare, and grant permissions to data consumers. For more information, see [Producer administrator actions](producer-cluster-admin.md).

**If you are a producer account administrator**, authorize datashares using the AWS Command Line Interface (AWS CLI) or the Amazon Redshift console and choose the data consumers. 

**If you are a consumer account administrator** – follow these steps:

To associate one or more datashares that are shared from other accounts to your entire AWS account or specific AWS Regions or namespaces within an AWS Region, use the Amazon Redshift console. 

With cross-Region data sharing, you can add clusters in a specific AWS Region using the AWS Command Line Interface (AWS CLI) or Amazon Redshift console.

To specify one or more AWS Regions, you can use the `associate-data-share-consumer` CLI command with the optional `consumer-region` option.

With the CLI, the following example associates the `Salesshare` datashare with the entire AWS account by using the `associate-entire-account` option. You can associate only one Region at a time.

```
aws redshift associate-data-share-consumer
--region {PRODUCER_REGION}
--data-share-arn arn:aws:redshift:{PRODUCER_REGION}:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/Salesshare
--associate-entire-account
```

The following example associates the `Salesshare` with the US East (Ohio) Region (`us-east-2`).

```
aws redshift associate-data-share-consumer
--region {PRODUCER_REGION}
--data-share-arn arn:aws:redshift:{PRODUCER_REGION}:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/Salesshare
--consumer-region 'us-east-2'
```

The following example associates the `Salesshare` with a specific consumer namespace in another AWS account in the Asia Pacific (Sydney) Region (`ap-southeast-2`).

```
aws redshift associate-data-share-consumer
--data-share-arn arn:aws:redshift:{PRODUCER_REGION}:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/Salesshare
--consumer-arn 'arn:aws:redshift:ap-southeast-2:{CONSUMER_ACCOUNT}:namespace:{ConsumerImmutableClusterId}'
```

You can use the Amazon Redshift console to associate datashares with your entire AWS account or specific AWS Regions or namespaces within an AWS Region. To do this, sign in to the [Amazon Redshift console](https://console.aws.amazon.com/redshiftv2/). Then associate one or more datashares that are shared from other accounts with your entire AWS account, the entire AWS Region, or a specific namespace within an AWS Region. For more information, see [Associating a datashare from a different AWS account in Amazon Redshift](writes-associating.md).

After the AWS account or specific namespaces are associated, the datashares become available for consumption. You can also change datashare association at any time. When changing association from individual namespaces to an AWS account, Amazon Redshift overwrites the namespaces with the AWS account information. When changing association from an AWS account to specific namespaces, Amazon Redshift overwrites the AWS account information with the namespace information. When changing association from an entire AWS account to specific AWS Regions and namespaces, Amazon Redshift overwrites the AWS account information with the specific Region and namespace information.

**If you are a consumer administrator**, you can create local databases that reference the datashares and grant permissions on databases created from the datashares to users or roles in the consumer cluster as needed. You can also create views on shared objects and create external schemas to refer to and assign granular permissions to specific schemas in the consumer database imported on the consumer cluster. For more information, see [Consumer administrator actions](consumer-cluster-admin.md).

# Managing cost control for cross-Region data sharing

With Amazon Redshift, you can control costs for cross-Region data sharing by limiting the amount of data that is transferred between AWS Regions. You can set data transfer limits, monitor data transfer usage, and receive notifications when you approach or exceed those limits.

When consuming data from a different Region, the consumer pays the cross-Region data transfer fee from the producer Region to the consumer Region. The price of data transfer differs between Regions. The charge is based on the bytes of data scanned for every successful query run. For more information about Amazon Redshift pricing, see [Amazon Redshift pricing](https://aws.amazon.com/redshift/pricing/).

You are charged for the number of bytes scanned, rounded up to the next megabyte, with a 10 MB minimum per query. You can set cost controls on your query usage and view the amount of data being transferred per query on your cluster.
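As a sketch of that rounding rule (the helper function below is ours for illustration and is not part of any Redshift API), the billed megabytes for a query can be computed as:

```
import math

MB = 1024 * 1024  # bytes per megabyte

def billed_megabytes(bytes_scanned: int) -> int:
    """Round bytes scanned up to the next megabyte, with a 10 MB minimum per query."""
    if bytes_scanned <= 0:
        return 0  # assumption: no charge when nothing is scanned
    return max(10, math.ceil(bytes_scanned / MB))

# A query scanning 3.2 MB is billed at the 10 MB minimum;
# one scanning 25.1 MB rounds up to 26 MB.
print(billed_megabytes(int(3.2 * MB)))   # 10
print(billed_megabytes(int(25.1 * MB)))  # 26
```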

To monitor and control your usage and the associated cost of cross-Region data sharing, you can create daily, weekly, or monthly usage limits and define actions that Amazon Redshift automatically takes if those limits are reached, helping you maintain your budget with predictability.

Depending on the usage limits you set, Amazon Redshift can log an event to a system table, send a CloudWatch alarm and notify an administrator with Amazon SNS, or turn off cross-Region data sharing to prevent further usage.

To create usage limits in the Amazon Redshift console, choose **Configure usage limit** under **Actions** for your cluster. You can monitor your usage trends and get alerts on usage exceeding your defined limits with automatically generated CloudWatch metrics from the **Cluster performance** or **Monitoring** tabs. You can also create, modify, and delete usage limits programmatically by using the AWS CLI or Amazon Redshift API operations. 
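As a sketch, the same kind of limit can be created with the AWS CLI `create-usage-limit` command. The cluster identifier and amount below are placeholders; for the `data-scanned` limit type, the amount is in terabytes:

```
aws redshift create-usage-limit
--cluster-identifier my-consumer-cluster
--feature-type cross-region-datasharing
--limit-type data-scanned
--amount 10
--period monthly
--breach-action log
```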

# Sharing licensed Amazon Redshift data on AWS Data Exchange


By creating AWS Data Exchange datashares and adding them to an AWS Data Exchange product, providers can license data in Amazon Redshift. Consumers can then discover, subscribe to, and query up-to-date data in Amazon Redshift as long as they have an active AWS Data Exchange subscription.

With AWS Data Exchange datashares added to an AWS Data Exchange product, consumers automatically have access to a product's datashares when their subscription starts and retain their access as long as their subscription is active.

**Topics**
+ [Working with AWS Data Exchange datashares as a producer](adx-getting-started-producer.md)
+ [Working with AWS Data Exchange datashares as a consumer](#adx-getting-started-consumer)

# Working with AWS Data Exchange datashares as a producer


With Amazon Redshift, you can share live data products with AWS Data Exchange as a producer by creating and managing datashares. 

**If you are a producer administrator, follow these steps to manage AWS Data Exchange datashares on the Amazon Redshift console:**

1. Create datashares in your cluster to share data on AWS Data Exchange and grant access to AWS Data Exchange to the datashares.

   Cluster superusers and database owners can create datashares. Each datashare is associated with a database during creation. Only objects from that database can be shared in that datashare. Multiple datashares can be created on the same database with the same or different granularity of objects. There is no limit on the number of datashares you can create on a cluster.

   You can also use the Amazon Redshift console to create datashares. For more information, see [Create a datashare](datashare-creation.md#create-datashare-console).

   Use the MANAGEDBY ADX option to implicitly grant access of the datashare to AWS Data Exchange when running the CREATE DATASHARE statement. This indicates that AWS Data Exchange manages this datashare. You can only use the MANAGEDBY ADX option when you create a new datashare. You can't use the ALTER DATASHARE statement to modify an existing datashare to add the MANAGEDBY ADX option. Once a datashare is created with the MANAGEDBY ADX option, only AWS Data Exchange can access and manage the datashare.

   ```
   CREATE DATASHARE salesshare SET MANAGEDBY ADX;
   ```

1. Add objects to the datashares. The producer administrator continues to manage the datashare objects that are available in an AWS Data Exchange datashare.

   To add objects to a datashare, add the schema before adding objects. When you add a schema, Amazon Redshift doesn't add all the objects under it. You must add them explicitly. For more information, see [ALTER DATASHARE](r_ALTER_DATASHARE.md).

   ```
   ALTER DATASHARE salesshare ADD SCHEMA PUBLIC;
   ALTER DATASHARE salesshare ADD TABLE public.tickit_sales_redshift;
   ALTER DATASHARE salesshare ADD ALL TABLES IN SCHEMA PUBLIC;
   ```

   You can also add views to a datashare.

   ```
   CREATE VIEW public.sales_data_summary_view AS SELECT * FROM public.tickit_sales_redshift;
   ALTER DATASHARE salesshare ADD TABLE public.sales_data_summary_view;
   ```

   Use ALTER DATASHARE to share schemas and the tables, views, and functions in a given schema. Superusers, datashare owners, or users who have ALTER or ALL permissions on the datashare can alter the datashare to add objects to or remove objects from it. Users must have permissions to add or remove objects from the datashare, and must either own the objects or have SELECT, USAGE, or ALL permissions on them.

   Use the INCLUDENEW clause to add any new tables, views, or SQL user-defined functions (UDFs) created in a specified schema to the datashare. Only superusers can change this property for each datashare-schema pair.

   ```
   ALTER DATASHARE salesshare ADD SCHEMA PUBLIC;
   ALTER DATASHARE salesshare SET INCLUDENEW = TRUE FOR SCHEMA PUBLIC;
   ```

   You can also use the Amazon Redshift console to add or remove objects from datashares. For more information, see [Add datashare objects to datashares](datashare-creation.md#add-datashare-object-console), [Removing datashare objects from datashares](manage-datashare-existing-console.md#remove-datashare-object-console), and [Editing AWS Data Exchange datashares](manage-adx-datashare-console.md#edit-adx-datashare-console).

1. To authorize access to the datashares for AWS Data Exchange, do one of the following:
   + Explicitly authorize access to the datashare for AWS Data Exchange by using the `ADX` keyword in the `aws redshift authorize-data-share` API. This allows AWS Data Exchange to recognize the datashare in the service account and manage associating consumers to the datashare.

     ```
     aws redshift authorize-data-share 
     --data-share-arn arn:aws:redshift:us-east-1:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/salesshare 
     --consumer-identifier ADX
     ```

      You can use the `ConsumerIdentifier` condition key in an IAM policy to explicitly allow or deny AWS Data Exchange from calling the `AuthorizeDataShare` and `DeauthorizeDataShare` API operations.
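
      The effect of the Deny statement below can be sketched as a small evaluation function. This is an illustrative model of the `StringEqualsIgnoreCase` condition match only, not the real IAM policy evaluator:

      ```python
      # Toy model of the Deny policy shown below: it blocks the two datashare
      # APIs only when the ConsumerIdentifier condition key equals "ADX",
      # compared case-insensitively (StringEqualsIgnoreCase).
      def is_denied(action: str, consumer_identifier: str) -> bool:
          denied_actions = {
              "redshift:AuthorizeDataShare",
              "redshift:DeauthorizeDataShare",
          }
          return action in denied_actions and consumer_identifier.lower() == "adx"

      print(is_denied("redshift:AuthorizeDataShare", "ADX"))           # True
      print(is_denied("redshift:AuthorizeDataShare", "123456789012"))  # False
      ```

      With such a policy attached, authorizing a datashare to an ordinary AWS account still succeeds; only the AWS Data Exchange (`ADX`) consumer identifier is blocked.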

------
#### [ JSON ]

     ```
     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Sid": "VisualEditor0",
                 "Effect": "Deny",
                 "Action": [
                     "redshift:AuthorizeDataShare",
                     "redshift:DeauthorizeDataShare"
                 ],
                 "Resource": "*",
                 "Condition": {
                     "StringEqualsIgnoreCase": {
                         "redshift:ConsumerIdentifier": "ADX"
                     }
                 }
             }
         ]
     }
     ```

------
   + Use the Amazon Redshift console to authorize or remove authorization of AWS Data Exchange datashares. For more information, see [Authorizing or removing authorization from datashares](authorize-datashare-console.md).
   + Optionally, you can implicitly authorize access to the AWS Data Exchange datashare when importing the datashare into an AWS Data Exchange dataset.

   To remove authorization for access to AWS Data Exchange datashares, use the `ADX` keyword in the `aws redshift deauthorize-data-share` API operation. Doing so allows AWS Data Exchange to recognize the datashare in the service account and manage removing consumer associations from the datashare.

   ```
   aws redshift deauthorize-data-share 
   --data-share-arn arn:aws:redshift:us-east-1:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/salesshare 
   --consumer-identifier ADX
   ```
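
   Both API calls take the same datashare ARN. The following sketch shows how that ARN is composed; `datashare_arn` is a hypothetical helper for illustration, not part of any AWS SDK:

   ```python
   # Hypothetical helper: compose the ARN passed to
   # authorize-data-share / deauthorize-data-share.
   def datashare_arn(region: str, producer_account: str,
                     namespace: str, share_name: str) -> str:
       return (f"arn:aws:redshift:{region}:{producer_account}:"
               f"datashare:{namespace}/{share_name}")

   print(datashare_arn("us-east-1", "123456789012",
                       "13b8833d-17c6-4f16-8fe4-1a018f5ed00d", "salesshare"))
   # arn:aws:redshift:us-east-1:123456789012:datashare:13b8833d-17c6-4f16-8fe4-1a018f5ed00d/salesshare
   ```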

1. List datashares created in the cluster and look into the contents of the datashare.

   The following example displays the information of a datashare named salesshare. For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md).

   ```
   DESC DATASHARE salesshare;
                  
    producer_account  |          producer_namespace          | share_type | share_name | object_type |           object_name          |   include_new
   -------------------+--------------------------------------+------------+------------+-------------+--------------------------------+-------------------
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_users_redshift   |   
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_venue_redshift   |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_category_redshift|
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_date_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_event_redshift   |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_listing_redshift |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_sales_redshift   |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | schema      | public                         |  t
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | view        | public.sales_data_summary_view |
   ```

   The following example displays the outbound datashares in a producer cluster.

   ```
   SHOW DATASHARES LIKE 'sales%';
   ```

   The output looks similar to the following.

   ```
   share_name | share_owner  | source_database | consumer_database | share_type |     createdate      | is_publicaccessible  | share_acl | producer_account |          producer_namespace 
   -----------+--------------+-----------------+-------------------+------------+---------------------+----------------------+-----------+------------------+---------------------------------------
   salesshare |    100       | dev             |                   |  OUTBOUND  | 2020-12-09 02:27:08 |          True        |           |   123456789012   | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d
   ```

   For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md). 

   You can also use [SVV\_DATASHARES](r_SVV_DATASHARES.md), [SVV\_DATASHARE\_CONSUMERS](r_SVV_DATASHARE_CONSUMERS.md), and [SVV\_DATASHARE\_OBJECTS](r_SVV_DATASHARE_OBJECTS.md) to view the datashares, the objects within the datashare, and the datashare consumers.

1. Drop datashares. We recommend that you don't use the DROP DATASHARE statement to delete an AWS Data Exchange datashare that is shared with other AWS accounts. Those accounts lose access to the datashare, the action is irreversible, and it might breach data product offer terms in AWS Data Exchange. If you must delete an AWS Data Exchange datashare, see [DROP DATASHARE usage notes](r_DROP_DATASHARE.md#r_DROP_DATASHARE_usage).

   The following example drops a datashare named salesshare.

   ```
   DROP DATASHARE salesshare;
   ERROR:  Drop of ADX-managed datashare salesshare requires session variable datashare_break_glass_session_var to be set to value '620c871f890c49'
   ```

   To allow dropping an AWS Data Exchange datashare, set the datashare\_break\_glass\_session\_var session variable to the value shown in the error message, and then run the DROP DATASHARE statement again. For more information, see [DROP DATASHARE usage notes](r_DROP_DATASHARE.md#r_DROP_DATASHARE_usage).

   You can also use the Amazon Redshift console to delete datashares. For more information, see [Deleting AWS Data Exchange datashares created in your account](manage-adx-datashare-console.md#delete-adx-datashare-console).

1. Use ALTER DATASHARE to remove objects from a datashare at any time. Use REVOKE USAGE ON DATASHARE to revoke a consumer's permissions on the datashare. Doing so revokes USAGE permissions on the objects within the datashare and immediately stops access from all consumer clusters. After access is revoked, listing datashares and metadata queries, such as listing databases and tables, no longer return the shared objects.

   ```
   ALTER DATASHARE salesshare REMOVE TABLE public.tickit_sales_redshift;
   ```

   You can also use the Amazon Redshift console to edit datashares. For more information, see [Editing AWS Data Exchange datashares](manage-adx-datashare-console.md#edit-adx-datashare-console).

1. Don't attempt to grant or revoke USAGE on AWS Data Exchange datashares. You can't run GRANT USAGE or REVOKE USAGE for a datashare that AWS Data Exchange manages. The following example shows the error returned when GRANT USAGE is run against an AWS account for a datashare that AWS Data Exchange manages.

   ```
   CREATE DATASHARE salesshare MANAGEDBY ADX;
   ```

   ```
   GRANT USAGE ON DATASHARE salesshare TO ACCOUNT '012345678910';
   ERROR:  Permission denied to add/remove consumer to/from datashare salesshare. Datashare consumers are managed by ADX.
   ```

   For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

**If you are a producer administrator, follow these steps to create and publish a datashare product on the AWS Data Exchange console:**
+ When the AWS Data Exchange datashare has been created, the producer creates a new dataset, imports assets, creates a revision, and creates and publishes a new product.

  Use the Amazon Redshift console to create datasets. For more information, see [Creating data sets on AWS Data Exchange](manage-adx-datashare-console.md#create-dataset-console).

  For more information, see [Providing data products on AWS Data Exchange](https://docs.aws.amazon.com/data-exchange/latest/userguide/providing-data-sets.html).

## Working with AWS Data Exchange datashares as a consumer


With Amazon Redshift, you can access and analyze datasets from AWS Data Exchange without having to store or manage copies of the data. 

**If you are a consumer, follow these steps to discover data products that contain AWS Data Exchange datashares and query Amazon Redshift data:**

1. On the AWS Data Exchange console, discover and subscribe to data products that contain AWS Data Exchange datashares.

   Once your subscription starts, you can access licensed Amazon Redshift data that is imported as assets to datasets that contain AWS Data Exchange datashares.

   For more information on how to get started with using data products that contain AWS Data Exchange datashares, see [Subscribing to data products on AWS Data Exchange](https://docs.aws.amazon.com/data-exchange/latest/userguide/subscribe-to-data-sets.html).

1. On the Amazon Redshift console, create an Amazon Redshift cluster, if needed.

   For information on how to create a cluster, see [Creating a cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#create-cluster).

1. List the datashares that are made available to you and view the content of datashares. For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md).

   The following example displays the information of inbound datashares of a specified producer namespace. When you run DESC DATASHARE as a consumer administrator, you must specify the ACCOUNT and NAMESPACE option to view inbound datashares. 

   ```
   DESC DATASHARE salesshare of ACCOUNT '123456789012' NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   
    producer_account  |          producer_namespace          | share_type | share_name | object_type |           object_name           |   include_new
   -------------------+--------------------------------------+------------+------------+-------------+---------------------------------+------------------
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_users_redshift    |     
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_venue_redshift    |     
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_category_redshift |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_date_redshift     |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_event_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_listing_redshift  |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_sales_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | schema      | public                          |    
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | view        | public.sales_data_summary_view  |
   ```

   Only cluster superusers can do this. You can also use SVV\_DATASHARES to view the datashares and SVV\_DATASHARE\_OBJECTS to view the objects within the datashare.

   The following example displays the inbound datashares in a consumer cluster.

   ```
   SHOW DATASHARES LIKE 'sales%';
   
   
    share_name | share_owner | source_database | consumer_database | share_type | createdate | is_publicaccessible | share_acl | producer_account |          producer_namespace
   ------------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+--------------------------------------
    salesshare |             |                 |                   | INBOUND    |            |         t           |           |   123456789012   | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d
   ```

1. Create local databases that reference the datashares. You must specify the ACCOUNT and NAMESPACE options to create local databases for AWS Data Exchange datashares. For more information, see [CREATE DATABASE](r_CREATE_DATABASE.md).

   ```
   CREATE DATABASE sales_db FROM DATASHARE salesshare OF ACCOUNT '123456789012' NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   If you want more granular control over access to the objects in the local database, use the WITH PERMISSIONS clause when creating the database. This lets you grant object-level permissions for objects in the database in step 6. 

   ```
   CREATE DATABASE sales_db WITH PERMISSIONS FROM DATASHARE salesshare OF ACCOUNT '123456789012' NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   You can see databases that you created from the datashare by querying the [SVV\_REDSHIFT\_DATABASES](r_SVV_REDSHIFT_DATABASES.md) view. You can connect to these databases directly, or you can connect to a local database on your consumer cluster and perform a cross-database query to query the data from the datashare databases. You can't create a datashare on top of database objects created from an existing datashare. However, you can copy the data into a separate table on the consumer cluster, perform any processing needed, and then share the new objects that were created.

   You can also use the Amazon Redshift console to create databases from datashares. For more information, see [Creating databases from datashares](query-datashare-console.md#create-database-from-datashare-console).

1. (Optional) Create external schemas to refer to and assign granular permissions to specific schemas in the consumer database imported on the consumer cluster. For more information, see [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md).

   ```
   CREATE EXTERNAL SCHEMA sales_schema FROM REDSHIFT DATABASE 'sales_db' SCHEMA 'public';
   ```

1. Grant permissions on databases and schema references created from the datashares to users or roles in the consumer cluster as needed. For more information, see [GRANT](r_GRANT.md) or [REVOKE](r_REVOKE.md).

   ```
   GRANT USAGE ON DATABASE sales_db TO Bob;
   ```

   ```
   GRANT USAGE ON SCHEMA sales_schema TO ROLE Analyst_role;
   ```

    If you created your database without WITH PERMISSIONS, you can only assign permissions on the entire database created from the datashare to your users and roles. In some cases, you need fine-grained controls on a subset of database objects created from the datashare. If so, you can create an external schema reference that points to specific schemas in the datashare (as described in the previous step) and provide granular permissions at schema level. 

   You can also create late-binding views on top of shared objects and use these to assign granular permissions. You can also consider having producer clusters create additional datashares for you with the granularity required. You can create as many schema references to the database created from the datashare as you need.

   If you created your database with WITH PERMISSIONS in step 4, you must assign object-level permissions for objects in the shared database. A user with only the USAGE permission can’t access any objects in a database created with WITH PERMISSIONS until they’re granted additional object-level permissions.

   ```
   GRANT SELECT ON sales_db.public.tickit_sales_redshift to Bob;
   ```
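
   The difference between the two modes can be summed up as a toy permission check. This is an illustrative model only; Amazon Redshift's actual authorization rules are richer:

   ```python
   # Toy model: with WITH PERMISSIONS, a consumer needs database USAGE
   # *and* an object-level grant; without it, database USAGE alone is
   # enough to reach every shared object.
   def can_select(db_usage: bool, object_grant: bool, with_permissions: bool) -> bool:
       if not db_usage:
           return False
       return object_grant if with_permissions else True

   print(can_select(db_usage=True, object_grant=False, with_permissions=True))   # False
   print(can_select(db_usage=True, object_grant=False, with_permissions=False))  # True
   ```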

1. Query data in the shared objects in the datashares.

   Users and roles with permissions on consumer databases and schemas on consumer clusters can explore and navigate the metadata of any shared objects. They can also explore and navigate local objects in a consumer cluster. To do this, they use JDBC or ODBC drivers, the SHOW commands, or the SVV\_ALL and SVV\_REDSHIFT views.

   A producer cluster's database might contain many schemas, and many tables and views within each schema. Users on the consumer side can see only the subset of objects that are made available through the datashare; they can't see the entire metadata from the producer cluster. This approach helps provide granular metadata security control with data sharing.

   You continue to connect to local cluster databases. But now, you can also read from the databases and schemas that are created from the datashare using the three-part database.schema.table notation. You can perform queries that span across any and all databases that are visible to you. These can be local databases on the cluster or databases created from the datashares. Or, you can directly connect to these consumer databases and run queries against the shared objects with partial notation.

   You can access the data using full qualification. For more information, see [Cross-database query examples](cross-database_example.md).

   ```
   SELECT * FROM sales_db.public.tickit_sales_redshift ORDER BY 1,2 LIMIT 5;
   
    salesid | listid | sellerid | buyerid | eventid | dateid | qtysold | pricepaid | commission |      saletime
   ---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------
          1 |      1 |    36861 |   21191 |    7872 |   1875 |       4 |    728.00 |     109.20 | 2008-02-18 02:36:48
          2 |      4 |     8117 |   11498 |    4337 |   1983 |       2 |     76.00 |      11.40 | 2008-06-06 05:00:16
          3 |      5 |     1616 |   17433 |    8647 |   1983 |       2 |    350.00 |      52.50 | 2008-06-06 08:26:17
          4 |      5 |     1616 |   19715 |    8647 |   1986 |       1 |    175.00 |      26.25 | 2008-06-09 08:38:52
          5 |      6 |    47402 |   14115 |    8240 |   2069 |       2 |    154.00 |      23.10 | 2008-08-31 09:17:02
   ```

   You can only use SELECT statements on shared objects. However, you can create tables in the consumer cluster by querying the data from the shared objects in a different local database.

   In addition to queries, consumers can create views on shared objects. Only late-binding views or materialized views are supported. Amazon Redshift doesn't support regular views on shared data. Views that consumers create can span across multiple local databases or databases created from datashares. For more information, see [CREATE VIEW](r_CREATE_VIEW.md).

   ```
   -- Connect to a local cluster database.

   -- Create a view on shared objects and access it.
   CREATE VIEW sales_data 
   AS SELECT * 
   FROM sales_db.public.tickit_sales_redshift 
   WITH NO SCHEMA BINDING;
   
   SELECT * FROM sales_data;
   ```

# Getting started with AWS Lake Formation-managed datashares

With Amazon Redshift, you can access and share live data across AWS accounts and Amazon Redshift clusters through AWS Lake Formation-managed datashares. AWS Lake Formation datashares enable data providers to securely share live data from their Amazon S3 data lake with any consumer, including other AWS accounts and Amazon Redshift clusters.

# Working with Lake Formation-managed datashares as a producer

With Amazon Redshift, you can access and analyze data shared through AWS Lake Formation datashares. AWS Lake Formation datashares enable secure data sharing across AWS accounts and Amazon Redshift clusters without having to copy or move the underlying data.

Sharing data to AWS Lake Formation lets you centrally define AWS Lake Formation permissions of Amazon Redshift datashares and restrict user access to objects within a datashare.

With Amazon Redshift, you can securely share live data across AWS accounts and Amazon Redshift clusters using AWS Lake Formation-managed datashares as a producer. A Lake Formation-managed datashare is an object that allows you to share live data from your Amazon Redshift cluster with other AWS accounts and services.

As a producer cluster or workgroup administrator, follow these steps to share datashares to Lake Formation:

1. Create datashares in your cluster and authorize AWS Lake Formation to access the datashares.

   Only cluster superusers and database owners can create datashares. Each datashare is associated with a database during creation. Only objects from that database can be shared in that datashare. Multiple datashares can be created on the same database with the same or different granularity of objects. There is no limit on the number of datashares you can create on a cluster.

   ```
   CREATE DATASHARE salesshare;
   ```

1. Add objects to the datashare. The producer cluster or workgroup administrator continues to manage which datashare objects are available. To add objects to a datashare, add their schema to the datashare first. When you add a schema, Amazon Redshift doesn't add all the objects under it; you must add them explicitly. For more information, see [ALTER DATASHARE](https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_DATASHARE.html). 

   ```
   ALTER DATASHARE salesshare ADD SCHEMA PUBLIC;
   ALTER DATASHARE salesshare ADD TABLE public.tickit_sales_redshift;
   ALTER DATASHARE salesshare ADD ALL TABLES IN SCHEMA PUBLIC;
   ```

   You can also add views to a datashare. Supported views are standard views, late binding views, and materialized views.

   ```
   CREATE VIEW public.sales_data_summary_view AS SELECT * FROM public.tickit_sales_redshift;
   ALTER DATASHARE salesshare ADD TABLE public.sales_data_summary_view;
   ```

   Use ALTER DATASHARE to share schemas, and to share tables and views in a given schema. Superusers, datashare owners, or users who have ALTER or ALL permissions on the datashare can alter the datashare to add objects to or remove objects from it. Database users must either own the objects or have SELECT, USAGE, or ALL permissions on the objects. 

   Use the INCLUDENEW clause to add any new tables and views created in a specified schema to the datashare. Only superusers can change this property for each datashare-schema pair.

   ```
   ALTER DATASHARE salesshare ADD SCHEMA PUBLIC;
   ALTER DATASHARE salesshare SET INCLUDENEW = TRUE FOR SCHEMA PUBLIC;
   ```

1. Grant usage on the datashare to a Lake Formation administrator account.

   ```
   GRANT USAGE ON DATASHARE salesshare TO ACCOUNT '012345678910' VIA DATA CATALOG;
   ```

   To revoke usage, use the following command.

   ```
   REVOKE USAGE ON DATASHARE salesshare FROM ACCOUNT '012345678910' VIA DATA CATALOG;
   ```

1. Authorize access to the datashare for Lake Formation by using the `aws redshift authorize-data-share` API operation. Doing so lets Lake Formation recognize the datashare in the service account and manage associating consumers to the datashare.

   ```
   aws redshift authorize-data-share 
   --data-share-arn arn:aws:redshift:us-east-1:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/salesshare 
   --consumer-identifier "DataCatalog/<consumer-account-id>"
   ```

    To remove authorization from Lake Formation-managed datashares, use the `aws redshift deauthorize-data-share` API operation. By doing so, you allow AWS Lake Formation to recognize the datashare in the service account and remove authorization. 

   ```
   aws redshift deauthorize-data-share 
   --data-share-arn arn:aws:redshift:us-east-1:{PRODUCER_ACCOUNT}:datashare:{PRODUCER_CLUSTER_NAMESPACE}/salesshare 
   --consumer-identifier "DataCatalog/<consumer-account-id>"
   ```

    At any time, if the producer cluster or workgroup administrator decides that there is no longer a need to share data with the consumer cluster or workgroup, they can use DROP DATASHARE to delete the datashare, deauthorize the datashare, or revoke datashare permissions. The associated permissions and objects in Lake Formation are not automatically deleted. 

   ```
   DROP DATASHARE salesshare;
   ```

   After authorizing the Lake Formation account to manage the datashare, the Lake Formation administrator can discover the shared datashare, associate the datashare with a Data Catalog ARN, and create a database in the AWS Glue Data Catalog that links to the datashare. To associate datashares using the AWS CLI, use the [associate-data-share-consumer](https://docs.aws.amazon.com/cli/latest/reference/redshift/associate-data-share-consumer.html) command. To share a datashare across AWS Regions, specify the `--region` parameter in the `associate-data-share-consumer` command or use the AWS console to choose your data consumers. The following example demonstrates how to share a Lake Formation-managed datashare across Regions. 

   ```
   aws redshift associate-data-share-consumer --region <region-1>
   --data-share-arn 'arn:aws:redshift:us-east-1:12345678912:datashare:035c45ea-61ce-86f0-8b75-19ac6102c3b7/sample_share' 
   --consumer-arn 'arn:aws:glue:<region-1>:111912345678:catalog'
   ```
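
   The `--consumer-arn` in the example is the consumer account's AWS Glue Data Catalog ARN. The following sketch shows how it is composed; `glue_catalog_arn` is a hypothetical helper for illustration, not part of any AWS SDK:

   ```python
   # Hypothetical helper: compose the Data Catalog consumer ARN used with
   # associate-data-share-consumer. The catalog ARN has no resource name
   # beyond "catalog".
   def glue_catalog_arn(region: str, account_id: str) -> str:
       return f"arn:aws:glue:{region}:{account_id}:catalog"

   print(glue_catalog_arn("us-west-2", "111912345678"))
   # arn:aws:glue:us-west-2:111912345678:catalog
   ```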

   The Lake Formation administrator must also create local resources that define how objects within the datashare should map to objects within Lake Formation. For more information about discovering datashares and creating local resources, see [Managing permissions for data in an Amazon Redshift datashare](https://docs.aws.amazon.com/lake-formation/latest/dg/data-sharing-redshift.html). 

# Working with Lake Formation-managed datashares as a consumer

With Amazon Redshift, you can access and analyze data shared with you through AWS Lake Formation datashares. A datashare is a data product that contains a collection of data objects, such as tables or databases, from different data sources. 

 After the AWS Lake Formation administrator discovers the datashare invitation and creates a database in the AWS Glue Data Catalog that links to the datashare, the consumer cluster or workgroup administrator can associate the cluster or workgroup with the datashare and the Data Catalog database, create a database local to the consumer cluster or workgroup, and grant users and roles in the Amazon Redshift consumer cluster or workgroup access so they can start querying. Follow these steps to set up querying permissions. 

1. On the Amazon Redshift console, create an Amazon Redshift cluster to serve as the consumer cluster or workgroup, if needed. For information on how to create a cluster, see [Creating a cluster](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#create-cluster).

1. To list the AWS Glue Data Catalog databases that consumer cluster or workgroup users have access to, run the [SHOW DATABASES](https://docs.aws.amazon.com/redshift/latest/dg/r_SHOW_DATABASES.html) command.

   ```
   SHOW DATABASES FROM DATA CATALOG [ACCOUNT <account-id>,<account-id2>] [LIKE <expression>]
   ```

   Doing so lists the resources that are available from the Data Catalog, such as the AWS Glue database’s ARN, database name, and information about the datashare.

1. Using the AWS Glue database ARN from SHOW DATABASES, create a local database in the consumer cluster or workgroup. For more information, see [CREATE DATABASE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_DATABASE.html).

   ```
   CREATE DATABASE lf_db FROM ARN <lake-formation-database-ARN> WITH [NO] DATA CATALOG SCHEMA [<schema>];
   ```
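
   The ARN identifies the Region, account, and database name. The following sketch parses it, assuming the common Glue database ARN form `arn:aws:glue:<region>:<account>:database/<name>`; `parse_glue_database_arn` is a hypothetical helper for illustration:

   ```python
   # Hypothetical helper: split a Glue database ARN into its parts.
   # Assumes the form arn:aws:glue:<region>:<account>:database/<name>.
   def parse_glue_database_arn(arn: str) -> dict:
       parts = arn.split(":", 5)
       _resource_type, _, name = parts[5].partition("/")
       return {"region": parts[3], "account": parts[4], "database": name}

   print(parse_glue_database_arn("arn:aws:glue:us-east-1:123456789012:database/lf_db"))
   # {'region': 'us-east-1', 'account': '123456789012', 'database': 'lf_db'}
   ```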

1. Grant access on databases and schema references created from the datashares to users and roles in the consumer cluster or workgroup as needed. For more information, see [GRANT](https://docs.aws.amazon.com/redshift/latest/dg/r_GRANT.html) or [REVOKE](https://docs.aws.amazon.com/redshift/latest/dg/r_REVOKE.html). Note that users created with the [CREATE USER](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_USER.html) command can't access objects in datashares that have been shared to Lake Formation. Only users with access to both Amazon Redshift and Lake Formation can access those datashares. 

   ```
   GRANT USAGE ON DATABASE lf_db TO IAM:Bob;
   ```

    As a consumer cluster or workgroup administrator, you can only assign permissions on the entire database created from the datashare to your users and roles. In some cases, you need fine-grained controls on a subset of database objects created from the datashare. 

    You can also create late-binding views on top of shared objects and use these to assign granular permissions. You can also consider having producer clusters or workgroups create additional datashares for you with the granularity required. You can create as many schema references to the database created from the datashare as you need. 

1. Database users can use the SVV\_EXTERNAL\_TABLES and SVV\_EXTERNAL\_COLUMNS views to find all of the shared tables or columns within the AWS Glue database.

   ```
   SELECT * from svv_external_tables WHERE redshift_database_name = 'lf_db';
                           
   SELECT * from svv_external_columns WHERE redshift_database_name = 'lf_db';
   ```

1. Query data in the shared objects in the datashares.

   Users and roles with permissions on consumer databases and schemas on consumer clusters or workgroups can explore and navigate the metadata of any shared objects. They can also explore and navigate local objects in a consumer cluster or workgroup. To do so, they can use the JDBC or ODBC drivers or the SVV\_ALL and SVV\_EXTERNAL views.

   ```
   SELECT * FROM lf_db.schema.table;
   ```

   You can only use SELECT statements on shared objects. However, you can create tables in the consumer cluster by querying the data from the shared objects in a different local database.

   ```
   -- Connect to a local cluster database.

   -- Create a view on shared objects and access it.
   CREATE VIEW sales_data 
   AS SELECT * 
   FROM sales_db.public.tickit_sales_redshift 
   WITH NO SCHEMA BINDING;
   
   SELECT * FROM sales_data;
   ```

# Getting started with multi-warehouse writes using data sharing in Amazon Redshift

You can share database objects for both reads and writes across different Amazon Redshift clusters or Amazon Redshift Serverless workgroups within the same AWS account, across accounts, and across AWS Regions. The procedures in this topic show how to set up data sharing that includes write permissions. You can grant permissions such as SELECT, INSERT, and UPDATE for different tables, and USAGE and CREATE for schemas. 

Data is live and available to all warehouses as soon as you commit a write transaction. Producer account administrators can determine whether specific namespaces or Regions get read-only access, read-and-write access, or no access to the data. The procedures assume you're working in a database in a provisioned cluster or Amazon Redshift Serverless workgroup.

With Amazon Redshift, you can manage data sharing with writes using the console or the SQL interface to control access and govern data across Amazon Redshift clusters and AWS accounts. The following sections provide step-by-step instructions on configuring and managing data sharing with writes using Amazon Redshift.

For a list of Regions where data sharing is available, see [AWS Regions where data sharing is available](data_sharing_regions.md). For considerations and limitations for writes, see [Considerations for data sharing in Amazon Redshift](datashare-considerations.md).

**Note**  
Amazon Redshift multi-warehouse writes using data sharing is only supported on Amazon Redshift patch 186 for provisioned clusters on current track version 1.0.78881 or greater, and for Amazon Redshift Serverless workgroups on version 1.0.78890 or greater. 

**Topics**
+ [Connecting to a database in Amazon Redshift](connect-database-console-writes.md)
+ [Producer actions for new datashares in Amazon Redshift](writes-producer-new.md)
+ [Consumer actions for new datashares in Amazon Redshift](writes-consumer-new.md)
+ [Producer actions for existing datashares in Amazon Redshift](writes-producer-existing.md)
+ [Consumer actions for existing datashares in Amazon Redshift](writes-consumer-existing.md)

# Connecting to a database in Amazon Redshift
Connecting to a database

With Amazon Redshift, you can establish a connection to your data warehouse cluster and execute SQL queries, load data, or perform administrative tasks. Connecting to a database refers to the process of creating a secure channel between a client application or tool and the Amazon Redshift cluster. The following sections provide step-by-step instructions on how to connect to an Amazon Redshift database.

------
#### [ Console ]

Connect to a database to view databases and objects within databases or to view datashares in the Amazon Redshift data warehouse. The user credentials used to connect to a specified database must have the necessary permissions to view all datashares.

If there is no local connection, do one of the following:
+ If you are a producer administrator, go to the **Clusters** tab for provisioned clusters, or the **Namespaces configuration** tab for Serverless endpoints. Select the respective cluster or namespace from the list. 
+ In the cluster or namespace details page, from the **Datashares** tab, choose **Connect to database** and do one of the following:
  + In the **Datashares from other namespaces and AWS accounts** section, view datashares from other clusters, namespaces, or accounts.
  + In the **Datashares created in my cluster** section, view datashares in your cluster.
+ On the **Connect to database** window, do one of the following:
  + If you choose **Create a new connection**, choose **AWS Secrets Manager** to use a stored secret to authenticate access for the connection. 

    Or, choose **Temporary credentials** to use database credentials to authenticate access for the connection. Specify values for **Database name** and **Database user**.

    Choose **Connect**.
  + Choose **Use a recent connection** to connect to another database for which you have the necessary permissions.

    Amazon Redshift automatically makes the connection.

After a database connection is established, you can start creating datashares, querying datashares, or creating databases from datashares.

------

# Producer actions for new datashares in Amazon Redshift
Producer actions for new datashares

With Amazon Redshift, you can share live data across Amazon Redshift clusters or AWS accounts using datashares. A datashare is a consumer-producer object that allows you to share live data from your Amazon Redshift cluster with other clusters or AWS accounts. Creating datashares enables secure data sharing while maintaining control over access and ensuring data remains up-to-date. The following sections provide details on creating datashares and adding database objects such as schemas, tables, and views to share live data securely.

**Topics**
+ [Creating a datashare in Amazon Redshift](writes-creating-datashare.md)
+ [Adding objects to a datashare in Amazon Redshift](writes-adding-datashare.md)
+ [Adding data consumers to a datashare in Amazon Redshift](writes-adding-data-consumer.md)
+ [Authorizing a datashare in Amazon Redshift](writes-authorizing.md)

# Creating a datashare in Amazon Redshift
Creating a datashare

A datashare is a logical container of database objects, permissions, and consumers. Consumers are Amazon Redshift provisioned clusters or Amazon Redshift Serverless namespaces in your account and other AWS accounts. Each datashare is associated with the database it's created in, and only objects from that database can be added. As a producer administrator, you can create datashares on the console or with SQL by following one of these procedures.

------
#### [ Console ]

On the console, you can create datashares from the **Datashares** tab in the cluster or namespace details page. After the datashare is created, you can create databases from the datashare on a consumer as a consumer administrator.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. In the cluster or namespace details page, from the **Datashares** tab, in the **Datashares** section, connect to a database if you don't have a database connection. In the **Datashares created in my account** section, choose **Create datashare**. The **Create datashare** page appears.

1. Choose **Create datashare**. You can only create a datashare from a local database. If you haven't connected to the database, the **Connect to database** page appears. Follow the steps in [Connecting to a database](connect-database-console.md) to connect to a database. If there is a recent connection, the **Create datashare** page appears. 

1. In the **Datashare information** section, choose one of the following:
   + Choose **Datashare** to create a datashare that shares data for read or write purposes across different Amazon Redshift data warehouses (provisioned clusters or Serverless endpoints), in the same AWS account or in different AWS accounts.
   + Choose **AWS Data Exchange datashare** to create datashares to license your data through AWS Data Exchange.

1. Specify values for **Datashare name**, **Database name**, and **Publicly accessible**. When you change the database name, make a new database connection.

1. Add objects to your datashare using either the **Scoped permissions** or **Direct permissions** sections. To add objects to a datashare, see [Adding objects to a datashare in Amazon Redshift](writes-adding-datashare.md). 

1. In the **Data consumers** section, you can choose to publish to Amazon Redshift, or publish to the AWS Glue Data Catalog, which starts the process of sharing data with Lake Formation. Publishing your datashare to Amazon Redshift means sharing your data with another namespace or Amazon Redshift account that acts as the consumer.
**Note**  
Once the datashare is created, you can't edit the configuration to publish to the other option.

1. Choose **Create datashare**.

------
#### [ SQL ]

The following command creates a datashare:

```
CREATE DATASHARE salesshare;
```

At the time of datashare creation, each datashare is associated with a database. Only objects from that database can be shared in that datashare. Multiple datashares can be created on the same database with the same or different granularity of objects. There is no limit on the number of datashares a cluster can create. You can also use the Amazon Redshift console to create datashares. For more information, see [CREATE DATASHARE](r_CREATE_DATASHARE.md).
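To confirm that the datashare exists on the producer, you can list it with SHOW DATASHARES. The share name pattern here is illustrative:

```
SHOW DATASHARES LIKE 'salesshare';
```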

You can also set security restrictions on the datashare when you create it. The following example creates a datashare that consumers with public IP access are allowed to read.

```
CREATE DATASHARE my_datashare SET PUBLICACCESSIBLE = TRUE;
```

Setting PUBLICACCESSIBLE to TRUE allows consumers to query your datashare from publicly accessible provisioned clusters and Amazon Redshift Serverless workgroups. Leave this out, or explicitly set it to FALSE, if you don't want to allow it.

You can modify consumer-related properties after you create a datashare. For example, you can specify that clusters that want to consume data from a given datashare can't be publicly accessible. Queries from consumer clusters that don't meet the security restrictions specified in the datashare are rejected at query runtime. For more information, see [ALTER DATASHARE](r_ALTER_DATASHARE.md).
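For example, the following statement turns off public accessibility on an existing datashare, so that queries from publicly accessible consumer clusters are rejected:

```
ALTER DATASHARE my_datashare SET PUBLICACCESSIBLE FALSE;
```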

------

# Adding objects to a datashare in Amazon Redshift
Adding objects to a datashare

You can add database objects of various types on the console or with SQL by following one of these procedures. 

------
#### [ Console ]

You can add objects to your datashare either using **Scoped permissions** or **Direct permissions** sections. Select either **Grant scoped permissions** or **Grant direct permissions** to add objects. Select the **Add** button to add objects. A dialog appears. Perform the following steps:

1. If you select **Grant scoped permissions**, the **Grant scoped permissions** page appears, where you can grant scoped permissions at either the database or the schema level. Datashares with scoped permissions have the specified permissions on all current and future objects within the database or schema. For more details, see [Scoped permissions](t_scoped-permissions.md).

   1. Next, select **Database scoped permissions** to grant scoped permissions at the database level. Database scoped permissions apply to the database you're connected to while creating the datashare. They can't be granted on individual objects and apply to both existing and future objects (schemas, tables, views, UDFs).

   1. Select the scoped permissions for schemas, tables, views, or user-defined functions. All objects in the database then have the selected permissions granted to consumers. Select **Grant** to complete granting database scoped permissions. 

   1. Then, select **Schema scoped permissions** to grant scoped permissions at the schema level. When you grant schema-scoped permissions, all objects added to the schema have the specified datashare permissions. 

   1. Select the schema you want to add to the datashare from the dropdown. You can select only a single schema at a time. Then, select direct permission(s) you want to grant on the selected schema.

   1. Select scoped permission(s) for your schema objects such as tables, views and user-defined functions. Permissions are granted on all matching objects in the schema. These can be either existing objects or those added in the future. When it's applied, you can't remove a permission from an object without revoking the scoped permissions. 

   1. Select **Grant** to complete granting schema scoped permissions. 

1. If you select **Grant direct permissions**, the **Grant direct permissions** page appears, where you can grant direct permissions at the level of individual objects such as schemas, tables, views, or user-defined functions. To grant direct permissions, you must first add the relevant schemas to the datashare.

   1. Next, select **Grant direct permissions** to schemas to apply direct permissions on a specific schema. Then, select the schema permissions for your schema objects such as tables, views, and user-defined functions, and select the schema you want added to the datashare. Select **Grant** to complete adding the schema to the datashare. 

   1. After you have a schema added to your datashare, you can add direct permissions for your schema objects. Select **Grant direct permissions** again. The **Grant direct permissions** page appears. Then, navigate to the direct permissions tabs for schema objects.

   1. Select **Grant direct permissions to tables and views** to grant object-level direct permissions on these objects. Select the required direct permissions and the required objects from the list. Use the search field to find datashare objects. Select **Grant** to complete adding tables and views to the datashare. 

   1. Select **Grant direct permissions to user-defined functions** to grant object level direct permissions on user-defined functions. Select the required direct permission(s) and the required objects from the list. Use the search field to find datashare objects. Select **Grant** to complete adding user-defined functions to the datashare. 

1. You can also choose whether you want to **Add future objects**. If you choose this option, objects added to the schema later are automatically added to the datashare.

1. Choose **Add** to complete the section and add the objects. They're listed under the **Datashare objects**.

1. After you add objects, you can select individual objects and edit their permissions. If you select a schema, a dialog appears that asks if you would like to add **Scoped permissions**. Scoped permissions give each existing or future object in the schema a pre-selected set of permissions appropriate for the object type. For instance, the administrator can specify that all added tables have SELECT and UPDATE permissions.

1. All datashare objects are listed under the **Scoped permissions** or **Direct permissions** sections.

1. In the **Data consumers** section, you can add namespaces or add AWS accounts as consumers of the datashare. 

1. Choose **Create datashare** to save your changes.

After you create the datashare, it appears in the list under **Datashares created in my namespace**. If you choose a datashare from the list, you can view its consumers, its objects, and other properties.

------
#### [ SQL ]

With SQL, the datashare owner must grant USAGE on the schemas they want to add to the datashare. The GRANT statement grants various actions on the schema, including CREATE and USAGE. The schemas hold the shared objects:

```
CREATE SCHEMA myshared_schema1;
CREATE SCHEMA myshared_schema2;
 
GRANT USAGE ON SCHEMA myshared_schema1 TO DATASHARE my_datashare;
GRANT CREATE, USAGE ON SCHEMA myshared_schema2 TO DATASHARE my_datashare;
```

Alternatively, the administrator can continue to run ALTER commands to add a schema to the datashare. Only USAGE permissions are granted when a schema is added this way.

```
ALTER DATASHARE my_datashare ADD SCHEMA myshared_schema1;
```
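The administrator can also remove a schema, and with it the objects it shares, using ALTER DATASHARE:

```
ALTER DATASHARE my_datashare REMOVE SCHEMA myshared_schema1;
```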

After the administrator adds schemas, they can grant datashare permissions on objects in the schema. These can be both read and write permissions. The GRANT ALL sample shows how to grant all permissions.

```
GRANT SELECT, INSERT ON TABLE myshared_schema1.table1, myshared_schema1.table2, myshared_schema2.table1
TO DATASHARE my_datashare;
                     
GRANT ALL ON TABLE myshared_schema1.table4 TO DATASHARE my_datashare;
```

You can continue to run commands like ALTER DATASHARE to add tables. When you do, only SELECT permissions are granted on the objects added.

```
ALTER DATASHARE my_datashare ADD TABLE myshared_schema1.table1, myshared_schema1.table2, myshared_schema2.table1;
```

You can grant scoped permissions to a datashare on all objects of a type within a database or schema. Datashares with scoped permissions have the specified permissions on all current and future objects within the database or schema.

You can view the scope of database-level scoped permissions in [SVV\_DATABASE\_PRIVILEGES](r_SVV_DATABASE_PRIVILEGES.md). You can view the scope of schema-level scoped permissions in [SVV\_SCHEMA\_PRIVILEGES](r_SVV_SCHEMA_PRIVILEGES.md).

The following is the syntax for granting scoped permissions to datashares. For more information about scoped permissions, see [Scoped permissions](t_scoped-permissions.md).

```
GRANT { { CREATE | USAGE | ALTER | DROP } [, ...] | ALL [ PRIVILEGES ] } FOR SCHEMAS IN
DATABASE db_name
TO DATASHARE datashare_name

GRANT { { SELECT | INSERT | UPDATE | DELETE | DROP | ALTER | TRUNCATE | REFERENCES } [, ...] | ALL [ PRIVILEGES ] } FOR TABLES IN
{ SCHEMA schema_name [ DATABASE db_name ] | DATABASE db_name }
TO DATASHARE datashare_name

GRANT { EXECUTE | ALL [ PRIVILEGES ] } FOR FUNCTIONS IN
{ SCHEMA schema_name [ DATABASE db_name ] | DATABASE db_name }
TO DATASHARE datashare_name
```
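As a concrete instance of the syntax above, the following statement grants scoped SELECT and INSERT permissions on all current and future tables in a schema. The schema and datashare names are illustrative:

```
GRANT SELECT, INSERT FOR TABLES IN SCHEMA myshared_schema1
TO DATASHARE my_datashare;
```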

------

# Adding data consumers to a datashare in Amazon Redshift
Adding data consumers to a datashare

You can add one or more data consumers to a datashare on the console or with SQL. Data consumers can be namespaces, which uniquely identify Amazon Redshift clusters, or AWS accounts.

------
#### [ Console ]

You must explicitly choose to turn off or turn on sharing your datashare to clusters with public access.
+ Choose **Add namespaces to the datashare**. A namespace is a globally unique identifier (GUID) for an Amazon Redshift cluster.
+ Choose **Add AWS accounts** to the datashare. The specified AWS accounts must have access permissions to the datashare.

------
#### [ SQL ]

With SQL, the administrator grants usage on the datashare to a specific namespace in the account. You can find the namespace ID as part of the ARN in the cluster details page, in the Amazon Redshift Serverless namespace details page, or by running the command `SELECT current_namespace;`. For more information, see [CURRENT\_NAMESPACE](https://docs.aws.amazon.com/redshift/latest/dg/r_CURRENT_NAMESPACE.html).

```
GRANT USAGE ON DATASHARE my_datashare TO NAMESPACE '86b5169f-012a-234b-9fbb-e2e24359e9a8';
```

The following is an example of how to grant usage of a datashare to an AWS account. 

```
GRANT USAGE ON DATASHARE salesshare TO ACCOUNT '123456789012';
```

The following is an example of how to grant usage of a datashare to a Lake Formation account.

```
GRANT USAGE ON DATASHARE salesshare TO ACCOUNT '123456789012' VIA DATA CATALOG;
```

------

# Authorizing a datashare in Amazon Redshift
Authorizing a datashare

With Amazon Redshift, you can control access to datashares by authorizing specified consumers. Datashares allow you to share live data across Amazon Redshift clusters in the same or different AWS accounts, providing a seamless way to distribute and consume analytical data. This section provides step-by-step instructions for authorizing and revoking consumer access to your datashares in Amazon Redshift.

**Note**  
If you are adding a namespace as a data consumer, you don't have to perform authorization. To authorize a datashare, there must be at least one data consumer added to the datashare.

------
#### [ Console ]

As a producer administrator on the console, you can choose which data consumers to authorize to access datashares or to remove authorization from. Authorized data consumers receive notifications to take actions on datashares. If you are adding a namespace as a data consumer, you don't have to perform authorization.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Datashares**. From here you can see a list called **Datashares consumers**. Choose one or more consumer clusters that you want to authorize. Then choose **Authorize**.

1. The **Authorize account** dialog appears. You can choose between two authorization types. 
   + **Read-only on [cluster name or workgroup name]** – This means that no write permissions are available on the consumer, even if the datashare creator granted write permissions.
   + **Read and write on [cluster name or workgroup name]** – This means that all permissions granted by the creator, including write permissions, are available on the consumer.

1. Choose **Save**.

You can also authorize AWS Data Exchange as a consumer.

1. If you chose **Publish to AWS Glue Data Catalog** when creating the datashare, you can only grant authorization of the datashare to a Lake Formation account.

   For an AWS Data Exchange datashare, you can authorize only one datashare at a time.

   When you authorize an AWS Data Exchange datashare, you are sharing the datashare with the AWS Data Exchange service and allowing AWS Data Exchange to manage access to the datashare on your behalf. AWS Data Exchange allows access to consumers by adding consumer accounts as data consumers to the AWS Data Exchange datashare when they subscribe to the products. AWS Data Exchange doesn't have read access to the datashare.

1. Choose **Save**.

After data consumers are authorized, they can access datashare objects and create a consumer database to query the data. 

------
#### [ API ]

The producer security administrator determines the following:
+ Whether or not another account can have access to the datashare.
+ If an account has access to the datashare, whether or not that account has write permissions.

The following IAM permissions are required to authorize a datashare: 

**redshift:AuthorizeDataShare**

You can authorize usage and writes using either a CLI call or with the API:

```
authorize-data-share
--data-share-arn <value>
--consumer-identifier <value>
[--allow-writes | --no-allow-writes]
```

For more information about the command, see [authorize-data-share](https://docs.aws.amazon.com/cli/latest/reference/redshift/authorize-data-share.html).

The consumer identifier can be either:
+ A twelve-digit AWS account ID.
+ The namespace identifier ARN.

**Note**  
Write permissions aren’t granted at the authorizing step. Authorizing a datashare for writes just allows the account to have write permissions that were granted by the datashare administrator. If an administrator does not allow writes, the only permissions available to the specific consumer are SELECT, USAGE, and EXECUTE.

You can change the authorization of a datashare consumer by calling `authorize-data-share` again, but with a different value. The old authorization is overwritten by the new authorization. So if you originally authorize and allow writes, but re-authorize and specify `no-allow-writes` or simply do not specify a value, the consumer will have their write permissions revoked.
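For example, the following call re-authorizes a consumer without write access, revoking any previously authorized writes. The datashare ARN and consumer account ID are illustrative placeholders:

```
aws redshift authorize-data-share \
    --data-share-arn "arn:aws:redshift:us-east-1:123456789012:datashare:13b8833d-17c6-4f16-8fe4-1a018f5ed00d/salesshare" \
    --consumer-identifier "555566667777" \
    --no-allow-writes
```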

------

# Consumer actions for new datashares in Amazon Redshift
Consumer actions for new datashares

With Amazon Redshift, you can consume datashares from other AWS accounts, enabling cross-account data sharing and collaboration. A datashare is a secure way to share live data across Amazon Redshift clusters, even if they are in different AWS accounts. The following sections provide detailed steps for configuring access, creating databases from datashares, granting object level permissions, and querying shared data.

**Topics**
+ [Associating a datashare from a different AWS account in Amazon Redshift](writes-associating.md)
+ [Creating a database from a datashare in Amazon Redshift](writes-creating-database.md)
+ [Granting object level permissions to consumer users and roles in Amazon Redshift](writes-granting.md)
+ [Querying data in a datashare in Amazon Redshift](writes-querying.md)

# Associating a datashare from a different AWS account in Amazon Redshift
Associating a datashare

With Amazon Redshift, you can associate datashares shared by other AWS accounts, enabling seamless and secure data sharing across organizational boundaries. Datashares are shareable database objects that encapsulate data from one or more Amazon Redshift databases. The following sections demonstrate the process of associating datashares.

------
#### [ Console ]

As a consumer administrator, you can associate one or more datashares that are shared from other accounts to your entire AWS account or specific namespaces in your account. 

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Datashares**. The datashare list page appears. Choose **From other accounts**.

1. In the **Datashares from other accounts** section, choose the datashare that you want to associate and choose **Associate**. When the **Associate** datashare page appears, choose one of the following association types:
   + Choose **Entire AWS account** to associate all existing and future namespaces across different AWS Regions in your AWS account with the datashare.
   + If the datashare is published to the AWS Glue Data Catalog, you can only associate the datashare with the entire AWS account.

1. From here you can choose **Allowed permissions**. The choices are:
   + **Read-only** – If you choose read only, write permissions like UPDATE or INSERT aren't available on the consumer, even if these permissions were granted and authorized on the producer.
   + **Read and write** – Consumer datashare users will have all of the permissions, both read and write, that were granted and authorized by the producer.

1. Alternatively, choose **Specific AWS Regions and namespaces** to associate one or more AWS Regions and specific namespaces with the datashare. Choose **Add Region** to add specific AWS Regions and namespaces to the datashare. The **Add AWS Region** page appears.

1. Choose an **AWS Region**. 

1. Do one of the following:
   + Choose **Add all namespaces** to add all existing and future namespaces in this Region to the datashare.
   + Choose **Add specific namespaces** to add one or more specific namespaces in this Region to the datashare.
   + Choose one or more namespaces and choose **Add AWS Region**.

1. Choose **Associate**.

It's possible for the producer to go back and change settings for an authorization, which can affect association settings on consumers.

If you're associating the datashare with a Lake Formation account, go to the Lake Formation console to create a database, then define permissions over the database. For more information, see [Setting up permissions for Amazon Redshift datashares](https://docs.aws.amazon.com/lake-formation/latest/dg/setup-ds-perms.html) in the *AWS Lake Formation Developer Guide*. Once you create an AWS Glue database or a federated database, you can use query editor v2 or any preferred SQL client with your consumer cluster to query the data. 

After the datashare is associated, it becomes available to the consumer.

**Note**  
You can also change datashare association at any time. When changing association from specific AWS Regions and namespaces to the entire AWS account, Amazon Redshift overwrites the specific Region and namespaces information with AWS account information. All the AWS Regions and namespaces in the AWS account then have access to the datashare.

------
#### [ API ]

**Note**  
The steps in this section are performed after the producer administrator grants specific actions on the shared database objects and, if the datashare is being shared with another account, the producer security administrator authorizes access.

The consumer security administrator determines the following:
+ Whether all namespaces in an account, namespaces in specific AWS Regions in the account, or only specific namespaces have access to the datashare.
+ If namespaces have access to the datashare, whether those namespaces have write permissions.

The consumer security administrator can associate the datashare with the following command:

```
associate-data-share-consumer
--data-share-arn <value>
--consumer-arn <value>
[--allow-writes | --no-allow-writes]
```

For more information about the command, see [associate-data-share-consumer](https://docs.aws.amazon.com/cli/latest/reference/redshift/associate-data-share-consumer.html).

The consumer security administrator must explicitly set `allow-writes` when associating a datashare with a namespace to allow use of INSERT and UPDATE commands. Otherwise, users can perform only read operations through the SELECT, USAGE, and EXECUTE permissions.

You can change the association of a namespace for a datashare by calling `associate-data-share-consumer` again, with a different value. The old association is overwritten by the new association, so if you originally associate and set `allow-writes`, but associate and specify `no-allow-writes`, or simply do not specify a value, the consumer will have their write permissions revoked.
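For example, the following call associates the datashare with a specific consumer namespace and allows writes. The ARNs are illustrative placeholders; the `--consumer-arn` value is the namespace ARN from the consumer account:

```
aws redshift associate-data-share-consumer \
    --data-share-arn "arn:aws:redshift:us-east-1:123456789012:datashare:13b8833d-17c6-4f16-8fe4-1a018f5ed00d/salesshare" \
    --consumer-arn "arn:aws:redshift:us-east-1:555566667777:namespace:86b5169f-012a-234b-9fbb-e2e24359e9a8" \
    --allow-writes
```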

------

# Creating a database from a datashare in Amazon Redshift
Creating a database from a datashare

With Amazon Redshift, you can use a datashare to create a database, and then query data across datashares from producer clusters to securely access live data without copying or transferring it. The following steps cover the details of setting up a database in your Amazon Redshift environment.

------
#### [ Console ]

Before you can query data in the datashare, you must create a database from a datashare. You can create only one database from a specified datashare.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**. The datashare list appears.

1. In the **Datashares from other clusters** section, choose **Connect to database**. For more information, see [Connecting to a database](connect-database-console.md).

1. Choose a datashare that you want to create databases from, then choose **Create database from datashare**. The Create database from datashare page appears.

1. In the **Database name**, specify a database name. The database name must be 1–64 alphanumeric characters (lowercase only) and it can't be a reserved word.

1. Choose **Create**.

After the database is created, you can query data in the database or, if write permissions were granted, authorized, and associated, perform write operations.

------
#### [ API ]

To consume shared data as a consumer administrator, perform the following steps. 

1. List the datashares that are made available to you and view the content of datashares. For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md).

   The following example displays the information of inbound datashares of a specified producer namespace. When you run DESC DATASHARE as a consumer administrator, you must specify the NAMESPACE option to view inbound datashares. 

   ```
   DESC DATASHARE salesshare OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   
   
    producer_account  |          producer_namespace          | share_type | share_name | object_type |           object_name           |   include_new
   -------------------+--------------------------------------+------------+------------+-------------+---------------------------------+------------------
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_users_redshift    |     
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_venue_redshift    |     
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_category_redshift |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_date_redshift     |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_event_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_listing_redshift  |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | table       | public.tickit_sales_redshift    |
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | schema      | public                          |    
    123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | INBOUND    | salesshare | view        | public.sales_data_summary_view  |
   ```

   Only cluster superusers can do this. You can also use SVV\_DATASHARES to view the datashares and SVV\_DATASHARE\_OBJECTS to view the objects within the datashare.

   The following example displays the inbound datashares in a consumer cluster.

   ```
   SHOW DATASHARES LIKE 'sales%';
   
   
    share_name | share_owner | source_database | consumer_database | share_type | createdate | is_publicaccessible | share_acl | producer_account |          producer_namespace
   ------------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+--------------------------------------
    salesshare |             |                 |                   | INBOUND    |            |         t           |           |   123456789012   | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d
   ```

1. As a database superuser, you can create local databases that reference the datashare. For more information, see [CREATE DATABASE](r_CREATE_DATABASE.md).

   ```
   CREATE DATABASE sales_db FROM DATASHARE salesshare OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   If you want more granular control over access to the objects in the local database, use the WITH PERMISSIONS clause when creating the database. This lets you grant object-level permissions for objects in the database in step 4. 

   ```
   CREATE DATABASE sales_db WITH PERMISSIONS FROM DATASHARE salesshare OF NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
   ```

   You can see databases that you created from the datashare by querying the [SVV\_REDSHIFT\_DATABASES](r_SVV_REDSHIFT_DATABASES.md) view. You can connect to these databases directly, or you can connect to a local database on your consumer cluster and perform a cross-database query to query the data from the datashare databases. 
**Note**  
You can't create a datashare on top of database objects created from an existing datashare. However, you can copy the data into a separate table on the consumer cluster, perform any processing needed, and then share the new objects that were created.

   You can also use the Amazon Redshift console to create databases from datashares. For more information, see [Creating databases from datashares](query-datashare-console.md#create-database-from-datashare-console).

------

# Granting object level permissions to consumer users and roles in Amazon Redshift
Granting object level permissions

As a consumer administrator, you can grant permissions to consumer users and roles at the object level by completing the following steps. 

------
#### [ SQL ]

If you created your database without WITH PERMISSIONS, you can only assign permissions on the entire database created from the datashare to your users and roles. 

```
GRANT USAGE ON DATABASE sales_db TO Bob;
```

```
GRANT USAGE ON SCHEMA sales_schema TO ROLE Analyst_role;
```

You can also create late-binding views on top of shared objects and use them to assign granular permissions. Alternatively, consider having producer clusters create additional datashares for you with the required granularity. 
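For example, a late-binding view over a shared table might look like the following sketch (the view name is illustrative):

```
-- A late-binding view over a shared table. WITH NO SCHEMA BINDING is
-- required for views that reference objects in another database.
CREATE VIEW public.sales_summary_vw AS
SELECT eventid, SUM(pricepaid) AS total_paid
FROM sales_db.public.tickit_sales_redshift
GROUP BY eventid
WITH NO SCHEMA BINDING;

GRANT SELECT ON public.sales_summary_vw TO Bob;
```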

If you created your database with WITH PERMISSIONS, you must assign object-level permissions for objects in the shared database. A user with only the USAGE permission can’t access any objects in a database created with WITH PERMISSIONS until they’re granted additional object-level permissions.

```
GRANT SELECT ON sales_db.public.tickit_sales_redshift TO Bob;
```

------

For more information about granting permissions with multi-warehouse writes, see [Managing permissions for datashares in Amazon Redshift](writes-managing-permissions.md).

# Querying data in a datashare in Amazon Redshift
Querying data in a datashare

With Amazon Redshift, you can query data across datashares from producer clusters to securely access live data without copying or transferring it. The following section covers querying datashares in your Amazon Redshift environment.

Users and roles with permissions on consumer databases and schemas on consumer clusters can explore and navigate the metadata of any shared objects. They can also explore and navigate local objects in a consumer cluster. To do this, they use JDBC or ODBC drivers or SVV\_ALL and SVV\_REDSHIFT views.
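For instance, a consumer user might browse shared metadata with a query like the following sketch against the SVV\_REDSHIFT views; the database name assumes the earlier `sales_db` example:

```
-- List tables visible to the current user in the database created
-- from the datashare
SELECT database_name, schema_name, table_name, table_type
FROM svv_redshift_tables
WHERE database_name = 'sales_db';
```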

Producer clusters might have many schemas in the database, tables, and views within each schema. The users on the consumer side can see only the subset of objects that are made available through the datashare. These users can't see the entire metadata from the producer cluster. This approach helps provide granular metadata security control with data sharing.

You continue to connect to local cluster databases. But now, you can also read from the databases and schemas that are created from the datashare using the three-part database.schema.table notation. You can perform queries that span across any and all databases that are visible to you. These can be local databases on the cluster or databases created from the datashares. Consumer clusters can't connect to the databases created from the datashares.

You can access the data using full qualification. For more information, see [Cross-database query examples](cross-database_example.md).

------
#### [ SQL ]

```
SELECT * FROM sales_db.public.tickit_sales_redshift ORDER BY 1,2 LIMIT 5;

 salesid | listid | sellerid | buyerid | eventid | dateid | qtysold | pricepaid | commission |      saletime
---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------
       1 |      1 |    36861 |   21191 |    7872 |   1875 |       4 |    728.00 |     109.20 | 2008-02-18 02:36:48
       2 |      4 |     8117 |   11498 |    4337 |   1983 |       2 |     76.00 |      11.40 | 2008-06-06 05:00:16
       3 |      5 |     1616 |   17433 |    8647 |   1983 |       2 |    350.00 |      52.50 | 2008-06-06 08:26:17
       4 |      5 |     1616 |   19715 |    8647 |   1986 |       1 |    175.00 |      26.25 | 2008-06-09 08:38:52
       5 |      6 |    47402 |   14115 |    8240 |   2069 |       2 |    154.00 |      23.10 | 2008-08-31 09:17:02
```

You can only use SELECT statements on shared objects. However, you can create tables in the consumer cluster by querying the data from the shared objects in a different local database.
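For example, the following sketch materializes shared data into a local table for further processing (the local table name is illustrative):

```
-- Create a local copy of shared data. CTAS that reads from a shared
-- object is allowed even though writes to shared objects are not.
CREATE TABLE public.sales_copy AS
SELECT * FROM sales_db.public.tickit_sales_redshift;
```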

------

# Producer actions for existing datashares in Amazon Redshift
Producer actions for existing datashares

With Amazon Redshift, you can manage existing datashares to control access to your data in an Amazon Redshift cluster. The following sections provide step-by-step guidance on modifying datashare objects, managing datashare permissions, and updating datashare properties to effectively control and audit data access in your Amazon Redshift environment.

**Topics**
+ [

# Viewing a datashare in Amazon Redshift
](writes-viewing.md)
+ [

# Editing datashares created in your account in Amazon Redshift
](writes-editing.md)
+ [

# Removing authorization from a datashare in Amazon Redshift
](writes-removing-authorization.md)
+ [

# Removing datashare objects from a datashare in Amazon Redshift
](writes-removing-datashare-object.md)
+ [

# Removing data consumers from a datashare in Amazon Redshift
](writes-removing-data-consumer.md)
+ [

# Deleting a datashare created in your account in Amazon Redshift
](writes-deleting.md)

# Viewing a datashare in Amazon Redshift
Viewing a datashare

You can view datashares from the console or with SQL. 

------
#### [ Console ]

You can view datashares from the **Datashares** or **Clusters** tab.
+ Use the **Datashares** tab to list datashares in your account or from other accounts.
  + To view datashares created in your account, choose **In my account**, then choose the datashare you want to view.
  + To view datashares that are shared from other accounts, choose **From other accounts**, then choose the datashare you want to view.
+ Use the **Clusters** tab to list datashares in your cluster or from other clusters.

  First, connect to a database. Then, choose a datashare either from the **Datashares from other clusters** or **Datashares created in my cluster** section to view its details.

------
#### [ SQL ]

You can list datashares created in the cluster and look into the contents of the datashare.

The following example displays the information of a datashare named `salesshare`. 

```
DESC DATASHARE salesshare;
               
 producer_account  |          producer_namespace          | share_type | share_name | object_type |           object_name          |   include_new
-------------------+--------------------------------------+------------+------------+-------------+--------------------------------+-------------------
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_users_redshift   |   
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_venue_redshift   |
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_category_redshift|
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_date_redshift    |
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_event_redshift   |
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_listing_redshift |
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | table       | public.tickit_sales_redshift   |
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | schema      | public                         |  t
 123456789012      | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d | OUTBOUND   | salesshare | view        | public.sales_data_summary_view |
```

The following example displays the outbound datashares in a producer cluster.

```
SHOW DATASHARES LIKE 'sales%';
```

The output looks similar to the following.

```
share_name | share_owner  | source_database | consumer_database | share_type |     createdate      | is_publicaccessible  | share_acl | producer_account |          producer_namespace 
-----------+--------------+-----------------+-------------------+------------+---------------------+----------------------+-----------+------------------+---------------------------------------
salesshare |    100       | dev             |                   |  OUTBOUND  | 2020-12-09 02:27:08 |          True        |           |   123456789012   | 13b8833d-17c6-4f16-8fe4-1a018f5ed00d
```

For more information, see [DESC DATASHARE](r_DESC_DATASHARE.md) and [SHOW DATASHARES](r_SHOW_DATASHARES.md). 

You can also use [SVV\_DATASHARES](r_SVV_DATASHARES.md), [SVV\_DATASHARE\_CONSUMERS](r_SVV_DATASHARE_CONSUMERS.md), and [SVV\_DATASHARE\_OBJECTS](r_SVV_DATASHARE_OBJECTS.md) to view the datashares, the objects within the datashare, and the datashare consumers.
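For example, on the producer you might check which consumers a datashare has been granted to; a sketch using the `salesshare` example:

```
-- List the consumers that salesshare has been granted to
SELECT share_name, consumer_account, consumer_namespace
FROM svv_datashare_consumers
WHERE share_name = 'salesshare';
```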

------

# Editing datashares created in your account in Amazon Redshift
Editing a datashare

You can edit a datashare created in your account using the console and with SQL.

------
#### [ Console ]

On the console, first connect to a database to see the list of datashares created in your account. Then take the following steps.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**.

1. In the **Datashares created in my account** section, choose **Connect to database**.

1. Choose the datashare you want to edit, then choose **Edit**. The datashare details page appears.

1. Make any changes in the **Datashare objects** or **Data consumers** section.

1. Choose **Save changes**. Amazon Redshift updates your datashare with the changes.
**Note**  
If you chose to publish your datashare to the AWS Glue Data Catalog, you can't edit the configuration to publish the datashare to other Amazon Redshift accounts.

------
#### [ SQL ]

Use ALTER DATASHARE to remove objects from a datashare at any point. To remove a schema, use the following command:

```
ALTER DATASHARE salesshare REMOVE SCHEMA PUBLIC;
```

To remove a table, use the following command:

```
ALTER DATASHARE salesshare REMOVE TABLE public.tickit_sales_redshift;
```

Use REVOKE USAGE ON to revoke permissions on the datashare from certain consumers. It revokes USAGE permissions on objects within a datashare and instantly stops access for all consumer clusters. After access is revoked, listing datashares and metadata queries, such as listing databases and tables, don't return the shared objects. Revoke access to the datashare from namespaces if you no longer want to share the data with those consumers.

```
REVOKE USAGE ON DATASHARE salesshare FROM NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
```

If you don't want to share the data with the consumers anymore, revoke access to the datashare from AWS accounts:

```
REVOKE USAGE ON DATASHARE salesshare FROM ACCOUNT '123456789012';
```

------

# Removing authorization from a datashare in Amazon Redshift
Removing authorization from a datashare

With Amazon Redshift, you can control access to datashares by revoking authorization for specified consumers. This section provides instructions for revoking consumer access to your datashares in Amazon Redshift.

**Note**  
To remove authorization for the datashare, there must be at least one data consumer added to the datashare.

------
#### [ Console ]

Choose one or more consumer clusters that you want to remove authorization from. Then, choose **Remove authorization**.

After authorization is removed, data consumers lose access to the datashare immediately.

------
#### [ API ]

The producer security administrator determines the following:
+ Whether or not another account can have access to the datashare.
+ If an account has access to the datashare, whether or not that account has write permissions.

The following IAM permission is required to deauthorize a datashare: 

**redshift:DeauthorizeDataShare**

You can deauthorize usage and writes using either the AWS CLI or the API:

```
deauthorize-data-share
--data-share-arn <value>
--consumer-identifier <value>
```

For more information about the command, see [deauthorize-data-share](https://docs.aws.amazon.com/cli/latest/reference/redshift/deauthorize-data-share.html).

------

# Removing datashare objects from a datashare in Amazon Redshift
Removing datashare objects from a datashare

You can remove one or more objects from a datashare by using the following procedure.

------
#### [ Console ]

To remove one or more objects from a datashare on the console, follow these steps.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**.

1. In the **Datashares created in my account** section, choose **Connect to database**. For more information, see [Connecting to a database](connect-database-console.md).

1. Choose the datashare you want to edit, then choose **Edit**. The datashare details page appears.

1. To remove one or more datashare objects from the datashare, do one of the following:
   + To remove schemas from the datashare, choose one or more schemas. Then choose **Remove**. Amazon Redshift removes the specified schemas and all the objects of the specified schemas from the datashare.
   + To remove tables and views from the datashare, choose one or more tables and views. Then choose **Remove**. Alternatively, choose **Remove by schema** to remove all tables and views in the specified schemas.
   + To remove user-defined functions from the datashare, choose one or more user-defined functions. Then choose **Remove**. Alternatively, choose **Remove by schema** to remove all user-defined functions in the specified schemas.

------
#### [ SQL ]

Use ALTER DATASHARE to remove objects from a datashare at any point. To remove a schema, use the following command:

```
ALTER DATASHARE salesshare REMOVE SCHEMA PUBLIC;
```

To remove a table, use the following command:

```
ALTER DATASHARE salesshare REMOVE TABLE public.tickit_sales_redshift;
```

------

# Removing data consumers from a datashare in Amazon Redshift
Removing data consumers from a datashare

You can remove one or more data consumers from a datashare. Data consumers can be namespaces that uniquely identify Amazon Redshift clusters, or AWS accounts.

------
#### [ Console ]

To remove one or more data consumers from a datashare on the console, choose one or more data consumers either from the namespace IDs or AWS account. Then, choose **Remove**.

Amazon Redshift removes the specified data consumers from the datashare. They lose access to the datashare immediately.

------
#### [ SQL ]

Use REVOKE USAGE ON to revoke permissions on the datashare from certain consumers. It revokes USAGE permissions on objects within a datashare and instantly stops access for all consumer clusters. After access is revoked, listing datashares and metadata queries, such as listing databases and tables, don't return the shared objects. Revoke access to the datashare from namespaces if you no longer want to share the data with those consumers.

```
REVOKE USAGE ON DATASHARE salesshare FROM NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
```

------

# Deleting a datashare created in your account in Amazon Redshift
Deleting a datashare

You can delete a datashare created in your account using the console or with SQL.

------
#### [ Console ]

To delete a datashare created in your account using the console, first connect to a database to see the list of datashares created in your account.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose your cluster. The cluster details page appears.

1. Choose **Datashares**. The datashare list appears.

1. In the **Datashares created in my account** section, choose **Connect to database**.

1. Choose one or more datashares you want to delete, then choose **Delete**. The Delete datashares page appears.

   Deleting a datashare shared with Lake Formation doesn't automatically remove the associated permissions in Lake Formation. To remove them, go to the Lake Formation console.

1. Type **Delete** to confirm deleting the specified datashares.

1. Choose **Delete**.

After datashares are deleted, datashare consumers lose access to the datashares.

------
#### [ SQL ]

You can use SQL to delete a datashare at any point using [DROP DATASHARE](r_DROP_DATASHARE.md). Cluster superusers and datashare owners can drop datashares.

The following example drops a datashare named `salesshare`.

```
DROP DATASHARE salesshare;
```

------

# Consumer actions for existing datashares in Amazon Redshift
Consumer actions for existing datashares

With Amazon Redshift, you can manage existing datashares to control access to your data in an Amazon Redshift cluster. The following sections provide step-by-step guidance on managing datashares in your Amazon Redshift environment as a consumer administrator.

**Topics**
+ [

# Managing permissions for datashares in Amazon Redshift
](writes-managing-permissions.md)
+ [

# Removing association of a datashare from data consumers in a different AWS account in Amazon Redshift
](writes-disassociating-datashare.md)
+ [

# Declining a datashare from a different AWS account in Amazon Redshift
](writes-declining-datashare.md)

# Managing permissions for datashares in Amazon Redshift
Managing permissions for a datashare

As a producer administrator, you retain control of the datasets you are sharing. You can add new objects to or remove them from the datashare. You can also grant or revoke access to datashares as a whole for the consumer clusters, AWS accounts, or AWS Regions. When permissions are revoked, consumer clusters immediately lose access to the shared objects and stop seeing them in the list of INBOUND datashares in SVV\_DATASHARES.

The following example creates the datashare `salesshare`, adds the schema `public`, and adds the table `public.tickit_sales_redshift` to `salesshare`. It also grants usage permissions on `salesshare` to the specified namespace.

```
CREATE DATASHARE salesshare;
            
ALTER DATASHARE salesshare ADD SCHEMA public;

ALTER DATASHARE salesshare ADD TABLE public.tickit_sales_redshift; 

GRANT USAGE ON DATASHARE salesshare TO NAMESPACE '13b8833d-17c6-4f16-8fe4-1a018f5ed00d';
```

For CREATE DATASHARE, superusers and database owners can create datashares. For more information, see [CREATE DATASHARE](r_CREATE_DATASHARE.md). For ALTER DATASHARE, the owner of the datashare with the required permissions on the datashare objects to be added or removed can alter the datashare. For information, see [ALTER DATASHARE](r_ALTER_DATASHARE.md). 

As a producer administrator, when you drop a datashare, it stops being listed on consumer clusters. The databases and schema references created on the consumer cluster from the dropped datashare continue to exist with no objects in them. The consumer administrator must delete these databases manually.
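On the consumer side, that cleanup is a regular DROP DATABASE; the database name here is the `sales_db` from the earlier examples:

```
-- Remove the now-empty database that was created from a dropped datashare
DROP DATABASE sales_db;
```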

On the consumer side, a consumer administrator can determine which users and roles should get access to the shared data by creating a database from the datashare. Depending on the options you choose when creating the database, you can control access to it as follows. For more information about creating a database from a datashare, see [CREATE DATABASE](r_CREATE_DATABASE.md).

For more information about setting up a datashare and reading data from a consumer, see [Sharing read access to data within an AWS account](https://docs.aws.amazon.com/redshift/latest/dg/within-account.html).

**Creating the database without the WITH PERMISSIONS clause**  
An administrator can control access at the database or schema level. To control access at the schema level, the administrator must create an external schema from the Amazon Redshift database created from the datashare. 

The following example grants permissions to access a shared table at the database level and schema level.

```
GRANT USAGE ON DATABASE sales_db TO Bob;

CREATE EXTERNAL SCHEMA sales_schema FROM REDSHIFT DATABASE sales_db SCHEMA 'public';

GRANT USAGE ON SCHEMA sales_schema TO ROLE Analyst_role;
```

To further restrict access, you can create views on top of shared objects, exposing only the necessary data. You can then use these views to give access to the users and roles.

After the users are granted access to the database or schema, they will have access to all shared objects in that database or schema.
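For example, a view exposing only a subset of columns from a shared table might look like the following sketch (the view and column choices are illustrative):

```
-- Expose only non-sensitive columns of a shared table. WITH NO SCHEMA
-- BINDING is required when the view references another database.
CREATE VIEW public.sales_public_vw AS
SELECT salesid, eventid, qtysold, saletime
FROM sales_db.public.tickit_sales_redshift
WITH NO SCHEMA BINDING;

GRANT SELECT ON public.sales_public_vw TO ROLE Analyst_role;
```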

**Creating the database with the WITH PERMISSIONS clause**  
After granting usage rights on the database or schema, an administrator can further control access using the same permission granting process as on a local database or schema. Without individual object permissions, users can’t access any objects in the shared database or schema, even after being granted the USAGE permission.

The following example grants the permissions needed to access a shared table.

```
GRANT USAGE ON DATABASE sales_db TO Bob;
GRANT USAGE FOR SCHEMAS IN DATABASE sales_db TO Bob;
GRANT SELECT ON sales_db.public.tickit_sales_redshift TO Bob;
```

After being granted access to the database or schema, users still need to be given the relevant permissions for any objects in the database or schema that you want them to access. 

## Granular sharing using WITH PERMISSIONS

You can use the WITH PERMISSIONS clause for granular sharing, enabling clusters or Amazon Redshift Serverless workgroups to query the datashare. This process assumes the datashare originates from another cluster or Amazon Redshift Serverless namespace in your account, or that it comes from another account and has been associated with the namespace you are using.

1. The consumer database administrator can create a database from the datashare.

   ```
   CREATE DATABASE my_ds_db [WITH PERMISSIONS] FROM DATASHARE my_datashare OF NAMESPACE 'abc123def';
   ```

   If you create a database WITH PERMISSIONS, you can grant granular permissions on datashare objects to different users and roles. Without this, all users and roles granted USAGE permission on the datashare database are granted all permissions on all objects within the datashare database.

1. The following shows how to grant permissions to an Amazon Redshift database user or role. You must be connected to a local database to run these statements; you can't run them after executing a USE command on the datashare database.

   ```
   GRANT USAGE ON DATABASE my_ds_db TO ROLE data_eng;
   GRANT CREATE, USAGE ON SCHEMA my_ds_db.my_shared_schema TO ROLE data_eng;
   GRANT ALL ON ALL TABLES IN SCHEMA my_ds_db.my_shared_schema TO ROLE data_eng;
    
   GRANT USAGE ON DATABASE my_ds_db TO bi_user;
   GRANT USAGE ON SCHEMA my_ds_db.my_shared_schema TO bi_user;
   GRANT SELECT ON my_ds_db.my_shared_schema.table1 TO bi_user;
   ```

# Removing association of a datashare from data consumers in a different AWS account in Amazon Redshift
Removing association of a datashare from data consumers

With Amazon Redshift, you can remove association from datashares shared by other AWS accounts. Datashares are shareable database objects that encapsulate data from one or more Amazon Redshift databases. The following sections demonstrate the process of disassociating datashares within your Amazon Redshift environment.

------
#### [ Console ]

As a consumer administrator, you can remove association of datashares from data consumers on the console.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Datashares**. The datashare list page appears.

1. Choose **From other accounts**.

1. In the **Datashares from other accounts** section, choose the datashare to remove association from data consumers.

1. In the **Data consumers** section, choose one or more data consumers to remove association from. Then choose **Remove association**.

1. When the Remove association page appears, choose **Remove association**.

After association is removed, data consumers lose access to the datashare. You can change the data consumer association at any time.

------
#### [ SQL ]

**Note**  
The steps in this section are performed after the producer administrator grants specific actions on the shared database objects and, if the datashare is being shared with another account, the producer security administrator authorizes access.

The consumer security administrator can disassociate the datashare with the following command:

```
disassociate-data-share-consumer
--data-share-arn <value>
```

For more information about the command, see [disassociate-data-share-consumer](https://docs.aws.amazon.com/cli/latest/reference/redshift/disassociate-data-share-consumer.html).

------

# Declining a datashare from a different AWS account in Amazon Redshift
Declining a datashare

With Amazon Redshift, you can decline datashares shared by other AWS accounts. Datashares are shareable database objects that encapsulate data from one or more Amazon Redshift databases. 

As a consumer administrator, you can reject any datashare whose state is available or active. After you reject a datashare, consumer cluster users lose access to the datashare. Amazon Redshift doesn't return the rejected datashare if you call the `DescribeDataSharesForConsumer` API operation. If the producer administrator runs the `DescribeDataSharesForProducer` API operation, they will see that the datashare was rejected. Once a datashare is rejected, the producer administrator can authorize the datashare to a consumer cluster again, and the consumer administrator can choose to associate their AWS account with the datashare or reject it. 

If your AWS account has an association to a datashare and a pending association to a datashare that's managed by Lake Formation, rejecting the datashare association that's managed by Lake Formation also rejects the original datashare. To reject a specific association, the producer administrator can remove authorization from a specified datashare. This action doesn't affect other datashares.

To reject a datashare, use the AWS console, the API operation `RejectDataShare`, or `reject-data-share` in the AWS CLI.

------
#### [ Console ]

To reject a datashare using the AWS console, perform the following steps.

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. In the navigation menu, choose **Datashares**.

1. Choose **From other accounts**.

1. In the **Datashares from other accounts** section, choose the datashare you want to decline. When the **Decline datashare** page appears, choose **Decline**.

After you decline the datashares, you can't revert the change. Amazon Redshift removes the datashares from the list. To see the datashare again, the producer administrator must authorize it again.

------
#### [ CLI ]

To reject a datashare, the administrator uses the following command: 

```
reject-data-share
--data-share-arn <value>
```

For more information about the command, see [reject-data-share](https://docs.aws.amazon.com/cli/latest/reference/redshift/reject-data-share.html). 

------

# Getting started with data sharing with AWS CloudFormation in Amazon Redshift
Getting started with CloudFormation

You can automate data sharing setup by using an AWS CloudFormation stack, which provisions AWS resources. The CloudFormation stack sets up data sharing between two Amazon Redshift clusters in the same AWS account. Thus, you can start data sharing without running SQL statements to provision your resources.

The stack creates a datashare on the cluster that you designate. The datashare includes a table and sample read-only data. This data can be read by your other Amazon Redshift cluster.

If you want to start sharing data in an AWS account by running SQL statements to set up a datashare and grant permissions, without using CloudFormation, see [Sharing read access to data within an AWS account](within-account.md).

Before running the data sharing CloudFormation stack, you must be logged in with a user that has permission to create an IAM role and a Lambda function. You also need two Amazon Redshift clusters in the same account. You use one, the *producer*, to share the sample data, and the other, the *consumer*, to read it. The primary requirement for these clusters is that each use RA3 nodes. For additional requirements, see [Considerations for data sharing in Amazon Redshift](datashare-considerations.md).

For more information about getting started setting up an Amazon Redshift cluster, see [Get started with Amazon Redshift provisioned data warehouses](https://docs.aws.amazon.com/redshift/latest/gsg/new-user.html). For more information about automating setup with CloudFormation, see [What is AWS CloudFormation?](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)

**Important**  
Before launching your CloudFormation stack, make sure you have two Amazon Redshift clusters in the same account and that the clusters use RA3 nodes. Make sure each cluster has a database and a superuser. For more information, see [CREATE DATABASE](r_CREATE_DATABASE.md) and [Superusers](r_superusers.md).

**To launch your CloudFormation stack for Amazon Redshift data sharing:**

1. Choose [https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=DataShare&templateURL=https://s3.amazonaws.com/redshift-downloads/docs-downloads/DataShare.yml](https://console.aws.amazon.com/cloudformation/home?#/stacks/new?stackName=DataShare&templateURL=https://s3.amazonaws.com/redshift-downloads/docs-downloads/DataShare.yml), which takes you to the CloudFormation service in the AWS Management Console.

   If you are prompted, sign in. 

   The stack creation process starts, referencing a CloudFormation template file, which is stored in Amazon S3. A CloudFormation *template* is a text file in JSON format that declares AWS resources that make up a stack. For more information about CloudFormation templates, see [Learn template basics](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/gettingstarted.templatebasics.html).

1. Choose **Next** to enter the stack details.

1. Under **Parameters**, for each cluster, enter the following:
   + Your Amazon Redshift cluster name, for example **ra3-consumer-cluster**
   + Your database name, for example **dev**
   + The name of your database user, for example **consumeruser**

   We recommend using test clusters, because the stack creates several database objects.

   Choose **Next**. 

1. The stack options appear. 

   Choose **Next** to accept the default settings.

1. Under **Capabilities**, choose **I acknowledge that AWS CloudFormation might create IAM resources.**

1. Choose **Create stack**. 

CloudFormation takes about 10 minutes to build the Amazon Redshift stack using the template, creating a datashare called `myproducer_share`. The stack creates the datashare in the database specified in the stack details. Only objects from that database can be shared.

If an error occurs while the stack is created, do the following:
+ Make sure that you entered the correct cluster name, database name, and database user name for each Redshift cluster. 
+ Make sure that your cluster has RA3 nodes.
+ Make sure you are logged in with a user that has permission to create an IAM role and a Lambda function. For more information about creating IAM roles, see [Creating IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html). For more information about policies for Lambda function creation, see [Function development](https://docs.aws.amazon.com/lambda/latest/dg/access-control-identity-based.html#permissions-user-function).

## Querying the datashare that you created


To use the following procedure, make sure that you have the required permissions for running queries on each cluster described.

**To query your datashare:**

1. Connect to the producer cluster on the database entered when your CloudFormation stack was created, using a client tool such as the Amazon Redshift query editor v2.

1. Query for datashares.

   ```
   SHOW DATASHARES;
                     
   +------------------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+--------------------------------------+
   |    share_name    | share_owner | source_database | consumer_database | share_type | createdate | is_publicaccessible | share_acl | producer_account |          producer_namespace          |
   +------------------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+--------------------------------------+
   | myproducer_share | 100         | sample_data_dev | myconsumer_db     | INBOUND    | NULL       | true                | NULL      |  producer-acct   |        your-producer-namespace       |
   +------------------+-------------+-----------------+-------------------+------------+------------+---------------------+-----------+------------------+--------------------------------------+
   ```

   The preceding command returns the name of the datashare created by the stack, called `myproducer_share`. It also returns the name of the database associated with the datashare, `myconsumer_db`. 

   Copy the producer namespace identifier to use in a later step.

1. Describe objects in the datashare.

   ```
   DESC DATASHARE myproducer_share;
                     
   +------------------+--------------------------------------+------------+------------------+-------------+-------------------------------------+-------------+
   | producer_account |          producer_namespace          | share_type |    share_name    | object_type |             object_name             | include_new |
   +------------------+--------------------------------------+------------+------------------+-------------+-------------------------------------+-------------+
   |   producer-acct  |        your-producer-namespace       | OUTBOUND   | myproducer_share | schema      | myproducer_schema                   | true        |
   |   producer-acct  |        your-producer-namespace       | OUTBOUND   | myproducer_share | table       | myproducer_schema.tickit_sales      | NULL        |
   |   producer-acct  |        your-producer-namespace       | OUTBOUND   | myproducer_share | view        | myproducer_schema.ticket_sales_view | NULL        |
   +------------------+--------------------------------------+------------+------------------+-------------+-------------------------------------+-------------+
   ```

   When you describe the datashare, it returns properties for tables and views. The stack adds tables and views with sample data to the producer database, for example `tickit_sales` and `tickit_sales_view`. For more information about the TICKIT sample database, see [Sample database](c_sampledb.md).

   You don't have to delegate permissions on the datashare to run queries. The stack grants the necessary permissions.

1. Connect to the consumer cluster using your client tool. Describe the datashare, specifying the producer's namespace.

   ```
   DESC DATASHARE myproducer_share OF NAMESPACE '<namespace id>'; --specify the unique identifier for the producer namespace
                     
   +------------------+--------------------------------------+------------+------------------+-------------+-------------------------------------+-------------+
   | producer_account |          producer_namespace          | share_type |    share_name    | object_type |             object_name             | include_new |
   +------------------+--------------------------------------+------------+------------------+-------------+-------------------------------------+-------------+
   |   producer-acct  |        your-producer-namespace       | INBOUND    | myproducer_share | schema      | myproducer_schema                   | NULL        |
   |   producer-acct  |        your-producer-namespace       | INBOUND    | myproducer_share | table       | myproducer_schema.tickit_sales      | NULL        |
   |   producer-acct  |        your-producer-namespace       | INBOUND    | myproducer_share | view        | myproducer_schema.ticket_sales_view | NULL        |
   +------------------+--------------------------------------+------------+------------------+-------------+-------------------------------------+-------------+
   ```

1. You can query tables in the datashare by specifying the datashare's database and schema. For more information, see [Cross-database query examples](cross-database_example.md). The following queries return sales and seller data from the SALES table in the TICKIT sample database. For more information, see [Sample database](c_sampledb.md).

   ```
   SELECT * FROM myconsumer_db.myproducer_schema.tickit_sales_view;
                     
   +---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------+
   | salesid | listid | sellerid | buyerid | eventid | dateid | qtysold | pricepaid | commission |      saletime       |
   +---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------+
   |       1 |      1 |    36861 |   21191 |    7872 |   1875 |       4 |       728 |      109.2 | 2008-02-18 02:36:48 |
   |       2 |      4 |     8117 |   11498 |    4337 |   1983 |       2 |        76 |       11.4 | 2008-06-06 05:00:16 |
   |       3 |      5 |     1616 |   17433 |    8647 |   1983 |       2 |       350 |       52.5 | 2008-06-06 08:26:17 |
   |       4 |      5 |     1616 |   19715 |    8647 |   1986 |       1 |       175 |      26.25 | 2008-06-09 08:38:52 |
   |       5 |      6 |    47402 |   14115 |    8240 |   2069 |       2 |       154 |       23.1 | 2008-08-31 09:17:02 |
   +---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------+
   ```
**Note**  
The query runs against the view in the shared schema. You can't connect directly to databases created from datashares. They are read-only.

1. To run a query that sorts and limits the results, use the following example.

   ```
   SELECT * FROM myconsumer_db.myproducer_schema.tickit_sales ORDER BY 1,2 LIMIT 5;
                     
   +---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------+
   | salesid | listid | sellerid | buyerid | eventid | dateid | qtysold | pricepaid | commission |      saletime       |
   +---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------+
   |       1 |      1 |    36861 |   21191 |    7872 |   1875 |       4 |       728 |      109.2 | 2008-02-18 02:36:48 |
   |       2 |      4 |     8117 |   11498 |    4337 |   1983 |       2 |        76 |       11.4 | 2008-06-06 05:00:16 |
   |       3 |      5 |     1616 |   17433 |    8647 |   1983 |       2 |       350 |       52.5 | 2008-06-06 08:26:17 |
   |       4 |      5 |     1616 |   19715 |    8647 |   1986 |       1 |       175 |      26.25 | 2008-06-09 08:38:52 |
   |       5 |      6 |    47402 |   14115 |    8240 |   2069 |       2 |       154 |       23.1 | 2008-08-31 09:17:02 |
   +---------+--------+----------+---------+---------+--------+---------+-----------+------------+---------------------+
   ```

   The query returns sales and seller data from the sample TICKIT data.

   For more examples of datashare queries, see [Sharing read access to data within an AWS account](within-account.md).

# Types of datashares in Amazon Redshift

A *datashare* is the unit of sharing data in Amazon Redshift. Use datashares to share data within the same AWS account or across AWS accounts, and to share data for reads across different Amazon Redshift clusters.

Each datashare is associated with a specific database in your Amazon Redshift cluster.

A producer administrator can create datashares and add datashare objects to share data with other clusters, referred to as *outbound* shares. A consumer administrator can receive datashares from other clusters, referred to as *inbound* shares. For details on producers and consumers, see [Datashare producers and consumers](adx_datashare_overview.md#datashare_producer_consumer).

Datashare objects are objects from specific databases on a cluster that producer administrators can add to datashares to be shared with data consumers. Datashare objects are read-only for data consumers. Examples of datashare objects are tables, views, and user-defined functions. You can add datashare objects when you create a datashare, or by editing a datashare at any time.

Data sharing continues to work when clusters are resized or when the producer cluster is paused.

There are different types of datashares: standard datashares, AWS Data Exchange datashares, and AWS Lake Formation-managed datashares. The following pages provide an overview of each of these.

# Standard datashares

With standard datashares, you can share data across provisioned clusters, serverless workgroups, Availability Zones, AWS accounts, and AWS Regions. You can share between cluster types as well as between provisioned clusters and Amazon Redshift Serverless.

To share data, note the following provisioned cluster, serverless namespace, and AWS account identifiers:
+ Provisioned namespaces are identifiers that identify Amazon Redshift provisioned clusters. A namespace globally unique identifier (GUID) is automatically created during provisioned cluster creation and attached to the cluster. A namespace Amazon Resource Name (ARN) is in the arn:&lt;partition&gt;:redshift:&lt;region&gt;:&lt;account-id&gt;:namespace:&lt;namespace-guid&gt; format. You can see the namespace of a provisioned cluster on the cluster details page on the Amazon Redshift console.

  In the data sharing workflow, the namespace GUID value and the namespace ARN are used to share data with clusters in the AWS account. You can also find the namespace for the current cluster by using the `current_namespace` function.
+ Serverless namespaces are identifiers that identify Amazon Redshift Serverless. A namespace globally unique identifier (GUID) is automatically created during Amazon Redshift Serverless creation and attached to the instance. A serverless namespace ARN is in the arn:&lt;partition&gt;:redshift-serverless:&lt;region&gt;:&lt;account-id&gt;:namespace/&lt;namespace-guid&gt; format.
+ AWS accounts can be consumers for datashares and are each represented by a 12-digit AWS account ID.
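As an illustration of the two namespace ARN formats above, here is a minimal sketch. The partition, Region, account, and GUID values are placeholders, not real identifiers; note that provisioned namespaces use a `:` separator before the GUID, while serverless namespaces use `/`.

```python
# Sketch: build namespace ARNs in the two formats described above.
# All input values below are illustrative placeholders.

def provisioned_namespace_arn(partition, region, account_id, namespace_guid):
    # Provisioned clusters use the "redshift" service and a ":" separator.
    return f"arn:{partition}:redshift:{region}:{account_id}:namespace:{namespace_guid}"

def serverless_namespace_arn(partition, region, account_id, namespace_guid):
    # Serverless uses the "redshift-serverless" service and a "/" separator.
    return f"arn:{partition}:redshift-serverless:{region}:{account_id}:namespace/{namespace_guid}"

print(provisioned_namespace_arn("aws", "us-east-1", "111122223333",
                                "12ab34cd-56ef-78ab-90cd-12ab34cd56ef"))
print(serverless_namespace_arn("aws", "us-east-1", "111122223333",
                               "12ab34cd-56ef-78ab-90cd-12ab34cd56ef"))
```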

For *standard datashares*, consider the following:
+ When a producer cluster is deleted, Amazon Redshift deletes the datashares created by the producer cluster. When a producer cluster is backed up and restored, the datashares it created persist on the restored cluster. If the restore is performed on the same serverless namespace, datashare permissions might be preserved; see [Considerations for data sharing in Amazon Redshift Serverless restore](considerations-datashare-serverless-restore.md) for supported scenarios. In other scenarios, datashare permissions granted to consumer clusters are no longer valid on the restored cluster, and you must re-grant usage permissions on the datashares to the desired consumer clusters. The consumer database on the consumer cluster still points to the datashare from the original cluster where the snapshot was taken. To query the shared data from the restored cluster, the consumer administrator either creates a different database or drops and recreates an existing consumer database to use the datashare from the newly restored cluster.
+ When a consumer cluster is deleted and restored from a snapshot, the access previously shared with that cluster is no longer valid or visible. If access to datashares is still required on the restored consumer cluster, the producer administrator must grant usage of the datashares to the restored consumer cluster again. The consumer administrator must drop any stale consumer databases created from the inactive datashares, and then recreate the consumer database from the datashare after the producer re-grants the permissions. Because the namespace GUID of a restored cluster differs from that of the original cluster, re-grant datashare permissions whenever the consumer or producer cluster is restored from a backup.

# AWS Data Exchange datashares

You can use AWS Data Exchange datashares to manage billing for Amazon Redshift data sharing.

An AWS Data Exchange datashare is a unit of licensing for sharing your data through AWS Data Exchange. AWS manages all billing and payments associated with subscriptions to AWS Data Exchange and use of Amazon Redshift data sharing. Approved data providers can add AWS Data Exchange datashares to AWS Data Exchange products. When customers subscribe to a product with AWS Data Exchange datashares, they get access to the datashares in the product.

*AWS Data Exchange for Amazon Redshift* makes it convenient to license access to your Amazon Redshift data through AWS Data Exchange. When a customer subscribes to a product with AWS Data Exchange datashares, AWS Data Exchange automatically adds the customer as a data consumer on all AWS Data Exchange datashares included with the product. Invoices are automatically generated, and payments are centrally collected and automatically disbursed through AWS Marketplace Entitlement Service.

Providers can license data in Amazon Redshift at a granular level, such as schemas, tables, views, and user-defined functions. You can use the same AWS Data Exchange datashare across multiple AWS Data Exchange products. Any objects added to the AWS Data Exchange datashare are available to consumers. Producers can view all AWS Data Exchange datashares managed by AWS Data Exchange on their behalf using Amazon Redshift API operations, SQL commands, and the Amazon Redshift console. Customers who subscribe to a product with AWS Data Exchange datashares have read-only access to the objects in the datashares.

Customers who want to consume third-party producer data can browse the AWS Data Exchange catalog to discover and subscribe to datasets in Amazon Redshift. After their AWS Data Exchange subscription is active, they can create a database from the datashare in their cluster and query the data in Amazon Redshift.

## How AWS Data Exchange datashares work


### Managing AWS Data Exchange datashares as a producer administrator

 If you are a data producer (also known as a provider on AWS Data Exchange), you can create AWS Data Exchange datashares that connect to your Amazon Redshift databases. To add AWS Data Exchange datashares to products on AWS Data Exchange, you must be a registered AWS Data Exchange provider.

For more information on how to get started with AWS Data Exchange datashares, see [Sharing licensed Amazon Redshift data on AWS Data Exchange](adx-getting-started.md).

### Using AWS Data Exchange datashares as a consumer with an active AWS Data Exchange subscription

If you are a consumer with an active AWS Data Exchange subscription (also known as a subscriber on AWS Data Exchange), you can browse the AWS Data Exchange catalog on the AWS Data Exchange console to discover products containing AWS Data Exchange datashares. 

After you subscribe to a product that contains AWS Data Exchange datashares, create a database from the datashare within your cluster. You can then query the data in Amazon Redshift directly without extracting, transforming, and loading the data. 

For more information on how to get started with AWS Data Exchange datashares, see [Sharing licensed Amazon Redshift data on AWS Data Exchange](adx-getting-started.md).

For *AWS Data Exchange datashares*, consider the following:
+ When a producer cluster is deleted, Amazon Redshift deletes the datashares created by the producer cluster. When a producer cluster is backed up and restored, the created datashares still persist on the restored cluster. For data subscribers to be able to continue accessing the data, create the AWS Data Exchange datashares again and publish them to the product's data sets. The consumer database on the consumer cluster points to the datashare from the original cluster where the snapshot is taken. To query the shared data from the restored cluster, the consumer administrator creates a different database, or drops and recreates an existing consumer database to use the newly created AWS Data Exchange datashare from the newly restored cluster.
+ We recommend that you don't delete your cluster if you have any AWS Data Exchange datashares. Deleting the cluster can breach data product terms in AWS Data Exchange.

## Datashare producers and consumers

*Data producers* (also known as data sharing producers or datashare producers) are clusters that you want to share data from. Producer administrators and database owners can create datashares using the CREATE DATASHARE command. You can add objects such as schemas, tables, views, and SQL user-defined functions (UDFs) from a database that you want the producer cluster to share with consumer clusters.

Data producers (also known as providers on AWS Data Exchange) for AWS Data Exchange datashares can license data through AWS Data Exchange. Approved providers can add AWS Data Exchange datashares to AWS Data Exchange products. 

When a customer subscribes to a product with AWS Data Exchange datashares, AWS Data Exchange automatically adds the customer as a data consumer on all AWS Data Exchange datashares included with the product. AWS Data Exchange also removes all customers from AWS Data Exchange datashares when their subscription ends. AWS Data Exchange also automatically manages billing, invoicing, payment collection, and payment distribution for paid products with AWS Data Exchange datashares. For more information, see [AWS Data Exchange datashares](#adx_datashare_overview). To register as an AWS Data Exchange data provider, see [Getting started as a provider](https://docs.aws.amazon.com/data-exchange/latest/userguide/provider-getting-started.html).

*Data consumers* (also known as data sharing consumers or datashare consumers) are clusters that receive datashares from producer clusters. 

Amazon Redshift clusters that share data can be in the same or different AWS accounts or different AWS Regions, so you can share data across organizations and collaborate with other parties. Consumer administrators receive the datashares that they are granted usage for and review the contents of each datashare. To consume shared data, the consumer administrator creates an Amazon Redshift database from the datashare. The administrator then assigns permissions for the database to users and roles in the consumer cluster. After permissions are granted, users and roles can list the shared objects as part of the standard metadata queries, along with the local data on the consumer cluster. They can start querying immediately.

If you are a *consumer with an active AWS Data Exchange subscription* (also known as a subscriber on AWS Data Exchange), you can find, subscribe to, and query granular, up-to-date data in Amazon Redshift without the need to extract, transform, and load the data. For more information, see [AWS Data Exchange datashares](#adx_datashare_overview).

# AWS Lake Formation-managed datashares

With Amazon Redshift, you can access and share live data across AWS accounts and Amazon Redshift clusters through AWS Lake Formation-managed datashares. AWS Lake Formation datashares enable data providers to securely share live data from their Amazon S3 data lake with any consumer, including other AWS accounts and Amazon Redshift clusters.

 Using AWS Lake Formation, you can centrally define and enforce database, table, column, and row-level access permissions of Amazon Redshift datashares and restrict user access to objects within a datashare. By sharing data through Lake Formation, you can define permissions in Lake Formation and apply those permissions to any datashare and its objects. For example, if you have a table containing employee information, you can use Lake Formation's column-level filters to prevent employees who don't work in the HR department from seeing personally identifiable information (PII), such as a social security number. For more information about data filters, see [Data filtering and cell-level security in Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/data-filtering.html) in the *AWS Lake Formation Developer Guide*. 

You can also use tags in Lake Formation to configure permissions on Lake Formation resources. For more information, see [Lake Formation Tag-based access control](https://docs.aws.amazon.com/lake-formation/latest/dg/tag-based-access-control.html).

Amazon Redshift currently supports data sharing through Lake Formation when sharing within the same account or across accounts. Cross-Region sharing is currently not supported.

The following is a high-level overview of how to use Lake Formation to control datashare permissions:

1. In Amazon Redshift, the producer cluster or workgroup administrator creates a datashare on the producer cluster or workgroup and grants usage to a Lake Formation account.

1. The producer cluster or workgroup administrator authorizes the Lake Formation account to access the datashare.

1. The Lake Formation administrator discovers and registers the datashares. They must also discover the AWS Glue ARNs they have access to and associate the datashares with an AWS Glue Data Catalog ARN. If you're using the AWS CLI, you can discover and accept datashares with the Amazon Redshift CLI operations `describe-data-shares` and `associate-data-share-consumer`. To register a datashare, use the Lake Formation CLI operation `register-resource`.

1. The Lake Formation administrator creates a federated database in the AWS Glue Data Catalog, and configures Lake Formation permissions to control user access to objects within the datashare. For more information about federated databases in AWS Glue, see [Managing permissions for data in an Amazon Redshift datashare](https://docs.aws.amazon.com/lake-formation/latest/dg/data-sharing-redshift.html).

1. The Lake Formation administrator discovers the AWS Glue databases they have access to and associates the datashare with an AWS Glue Data Catalog ARN.

1. The Redshift administrator discovers the AWS Glue database ARNs they have access to, creates an external database in the Amazon Redshift consumer cluster using an AWS Glue database ARN, and grants usage to [database users authenticated with IAM credentials](https://docs.aws.amazon.com/redshift/latest/mgmt/options-for-providing-iam-credentials.html) so that they can start querying the Amazon Redshift database.

1. Database users can use the views SVV_EXTERNAL_TABLES and SVV_EXTERNAL_COLUMNS to find all of the tables or columns within the AWS Glue database that they have access to, and then they can query the AWS Glue database's tables.

1. When the producer cluster or workgroup administrator decides to no longer share the data with the consumer cluster, the producer administrator can revoke usage, deauthorize, or delete the datashare from Redshift. The associated permissions and objects in Lake Formation are not automatically deleted.

For more information about sharing a datashare with AWS Lake Formation as a producer cluster or workgroup administrator, see [Working with Lake Formation-managed datashares as a producer](lake-formation-getting-started-producer.md). To consume the shared data from the producer cluster or workgroup, see [Working with Lake Formation-managed datashares as a consumer](lake-formation-getting-started-consumer.md).

# Datashare status values in Amazon Redshift

With Amazon Redshift, you can securely share live data across Amazon Redshift clusters without having to copy or transfer data. Datashares for Amazon Redshift enables you to share live query results, including updates to the source data, with any Amazon Redshift cluster in the same or different AWS accounts and AWS Regions. This topic describes the possible statuses that datashares can have in Amazon Redshift.

With cross-account datashares, a datashare can have a status of active, action required, or inactive, and some statuses require action on your part.

The following describes each datashare status and its required action:
+ When a producer administrator creates a datashare, the datashare status on the producer cluster is **Pending authorization**. The producer administrator can authorize data consumers to access the datashare. There isn't any action for the consumer administrator.
+ When a producer administrator authorizes the datashare, the datashare status becomes **Authorized** on the producer cluster. There isn't any action for the producer administrator. When there is at least one association with a data consumer for the datashare, the datashare status changes from **Authorized** to **Active**.

  The datashare status then becomes **Available (Action required on the Amazon Redshift console)** on the consumer cluster. The consumer administrator can associate the datashare with data consumers or reject the datashare. The consumer administrator can also use the AWS CLI command `describe-data-shares-for-consumer` to view the status of datashares. Or the administrator can use the CLI command `describe-data-shares` and provide the datashare Amazon Resource Name (ARN) to view the status of the datashare.
+ When the consumer administrator associates a datashare with data consumers, the datashare status changes from **Authorized** to **Active** on the producer cluster. There isn't any action required for the producer administrator.

  The datashare status becomes **Active** on the consumer cluster. There isn't any action required for the consumer administrator.
+ When the consumer administrator removes a consumer association from a datashare, the datashare status becomes either **Active** or **Authorized**. It becomes **Active** when at least one association with another data consumer still exists for the datashare. It becomes **Authorized** when there aren't any consumer associations with the datashare on the producer cluster. There isn't any action for the producer administrator.

  The datashare status becomes **Action required** on the consumer cluster if all associations are removed. The consumer administrator can reassociate a datashare with data consumers when the datashare is available to the consumers.
+ When a consumer administrator declines a datashare, the datashare status becomes **Action required** on the producer cluster and **Declined** on the consumer cluster. The producer administrator can reauthorize the datashare. There isn't any action for the consumer administrator.
+ When the producer administrator removes authorization from a datashare, the datashare's status becomes **Action required** on the producer cluster. The producer administrator can choose to reauthorize the datashare, if necessary. There isn't any action required for the consumer administrator.
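The status transitions above can be summarized in a small lookup table. This is an informal sketch for reference only; the event names are illustrative and not part of any Amazon Redshift API.

```python
# Informal sketch: cross-account datashare status after each event, as a
# (producer-side status, consumer-side status) pair. The event names are
# illustrative; None means the topic describes no status for that side.
STATUS_AFTER_EVENT = {
    "producer_creates_datashare":  ("Pending authorization", None),
    "producer_authorizes":         ("Authorized", "Available (action required)"),
    "consumer_associates":         ("Active", "Active"),
    # Case where the last remaining consumer association is removed:
    "consumer_removes_last_assoc": ("Authorized", "Action required"),
    "consumer_declines":           ("Action required", "Declined"),
    "producer_removes_auth":       ("Action required", None),
}

producer_status, consumer_status = STATUS_AFTER_EVENT["consumer_associates"]
print(producer_status)  # Active
```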

# Managing access to data sharing API operations with IAM policies

To control the access to the data sharing API operations, use IAM action-based policies. For information about how to manage IAM policies, see [Managing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) in the *IAM User Guide*.

For information on the permissions required to use the data sharing API operations, see [Permissions required to use the data sharing API operations](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html) in the *Amazon Redshift Management Guide*.

To make cross-account data sharing more secure, you can use a conditional key `ConsumerIdentifier` for the `AuthorizeDataShare` and `DeauthorizeDataShare` API operations. By doing this, you can explicitly control which AWS accounts can make calls to the two API operations.

You can deny authorizing or deauthorizing data sharing for any consumer that isn't your own account. To do so, specify the AWS account number in the IAM policy.

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Deny",
            "Action": [
                "redshift:AuthorizeDataShare",
                "redshift:DeauthorizeDataShare"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "redshift:ConsumerIdentifier": "555555555555"
                }
            }
        }
    ]
}
```

------

You can allow a producer to authorize and deauthorize the datashare **testshare2** only for a specific consumer, in this case AWS account 111122223333, by specifying the datashare ARN as the resource in the IAM policy.

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "redshift:AuthorizeDataShare",
                "redshift:DeauthorizeDataShare"
            ],
            "Resource": "arn:aws:redshift:us-east-1:666666666666:datashare:af06285e-8a45-4ee9-b598-648c218c8ff1/testshare2",
            "Condition": {
                "StringEquals": {
                    "redshift:ConsumerIdentifier": "111122223333"
                }
            }
        }
    ]
}
```

------

# Connecting to consumer databases in Amazon Redshift
Connecting to consumer databases

With a direct connection to a datashare database, you can directly connect to a database created from a datashare in the same way as you can connect to any other type of Amazon Redshift database. For example, you can connect to a database created from a datashare using JDBC or ODBC drivers, the Amazon Redshift query editor v2, or any other tool that can connect to an Amazon Redshift database. For more information, see [Connecting to an Amazon Redshift data warehouse using SQL client tools](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-to-cluster.html).

## Accessing shared data


When you connect to a database created from a datashare, you can query the shared objects using two-part notation (`schema_name.table_name`). If the table's schema is on the consumer database's search path, you can also use one-part notation (`table_name`). 

If you want to perform cross-database queries, you can use three-part notation (`consumer_database_name.schema_name.table_name`). These queries can reference shared objects from other consumer databases on the cluster, or local objects from local databases. They can also reference both local databases and data shared from other clusters within the same query.
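For example, assuming a datashare database named `sales_db` that contains a shared schema `marketing` with a table `orders`, and a local schema `crm` with a table `customers` (all names hypothetical), the notations might look like this:

```
-- Two-part notation while connected to the datashare database
SELECT count(*) FROM marketing.orders;

-- Three-part notation from another database on the same cluster,
-- joining shared data with a local table in the same query
SELECT o.order_id, c.customer_name
FROM sales_db.marketing.orders o
JOIN crm.customers c ON o.customer_id = c.customer_id;
```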

**Note**  
A database that is created from a datashare doesn't have a local catalog. Therefore, any queries that access local catalog tables, such as `pg_class`, return an empty result.

## Accessing metadata for shared objects


To help cluster administrators discover shared objects in the consumer database, Amazon Redshift provides a set of metadata views and SHOW commands that list the metadata for these objects. When you connect to a consumer database, these metadata views and commands don't support cross-database metadata discovery. They only return metadata for the shared objects in the datashare that are associated with the connected database.

Use SHOW SCHEMAS to view a list of the shared schemas in the datashare associated with the connected database. For more information, see [SHOW SCHEMAS](r_SHOW_SCHEMAS.md).

Use SHOW TABLES to view a list of the tables in a shared schema from the datashare associated with the connected database. For more information, see [SHOW TABLES](r_SHOW_TABLES.md).

Use SHOW COLUMNS to view a list of the columns from a shared table in the datashare associated with the connected database. For more information, see [SHOW COLUMNS](r_SHOW_COLUMNS.md).

Use SVV\_ALL\_SCHEMAS to view a list of the shared schemas in the datashare associated with the connected database. For more information, see [SVV\_ALL\_SCHEMAS](r_SVV_ALL_SCHEMAS.md).

Use SVV\_ALL\_TABLES to view a list of the shared tables in the datashare associated with the connected database. For more information, see [SVV\_ALL\_TABLES](r_SVV_ALL_TABLES.md).

Use SVV\_ALL\_COLUMNS to view a list of the shared columns in the datashare associated with the connected database. For more information, see [SVV\_ALL\_COLUMNS](r_SVV_ALL_COLUMNS.md).
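As a sketch, while connected to a consumer database created from a datashare (object names hypothetical), metadata discovery might look like this:

```
SHOW SCHEMAS FROM DATABASE sales_db;
SHOW TABLES FROM SCHEMA sales_db.marketing;
SHOW COLUMNS FROM TABLE sales_db.marketing.orders;

-- The same discovery through the metadata views
SELECT schema_name FROM svv_all_schemas WHERE database_name = 'sales_db';
SELECT table_name FROM svv_all_tables WHERE schema_name = 'marketing';
```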

## Integrating Amazon Redshift data sharing with business intelligence tools


To integrate data sharing with business intelligence (BI) tools, we recommend that you use the Amazon Redshift JDBC or ODBC drivers. These drivers support the `GetCatalogs` API operation, which returns a list of all databases, including those created from datashares. 

The drivers also support downstream operations, such as `GetSchemas` and `GetTables`, that return data from all the databases that `GetCatalogs` returns. The drivers provide this support even when you don't explicitly specify the catalog in the call. For more information on JDBC or ODBC drivers, see [Configuring connections](https://docs.aws.amazon.com/redshift/latest/mgmt/configuring-connections.html) in the *Amazon Redshift Management Guide*.

The Amazon Redshift query editor v2 includes consumer databases in its connection switching interface. However, most tools exclude these databases and only include local cluster databases as connectable databases.

**Note**  
A new system database named `sys:internal` was added for internal maintenance. Some tools include this system database as a connectable database. However, you can't connect to it or run queries against its objects.

# Monitoring and auditing data sharing in Amazon Redshift
Monitoring and auditing data sharing

With Amazon Redshift, you can monitor and audit data sharing activities to ensure compliance and security.

By auditing data sharing, producers can track the datashare evolution. For example, auditing helps track when datashares are created, objects are added or removed, and permissions are granted or revoked to Amazon Redshift clusters, AWS accounts, or AWS Regions.

In addition to auditing, producers and consumers can track datashare usage at various granularities, such as the account, cluster, and object levels. For more information about tracking usage and auditing views, see [SVL\_DATASHARE\_CHANGE\_LOG](r_SVL_DATASHARE_CHANGE_LOG.md) and [SVL\_DATASHARE\_USAGE\_PRODUCER](r_SVL_DATASHARE_USAGE_PRODUCER.md).

You can monitor datashares by querying system views.

1. The producer administrator who wants to share data creates an Amazon Redshift datashare. The producer administrator then adds the needed database objects, such as schemas, tables, and views, to the datashare and specifies a list of consumers to share the objects with. 

   Use the following system views to see consolidated views for tracking changes to and usage of datashares on producer and consumer clusters:
   + [SYS\_DATASHARE\_CHANGE\_LOG](SYS_DATASHARE_CHANGE_LOG.md)
   + [SYS\_DATASHARE\_USAGE\_CONSUMER](SYS_DATASHARE_USAGE_CONSUMER.md)
   + [SYS\_DATASHARE\_USAGE\_PRODUCER](SYS_DATASHARE_USAGE_PRODUCER.md)

   Use the following system views to see datashare objects and data consumer information for outbound datashares:
   + [SVV\_DATASHARES](r_SVV_DATASHARES.md)
   + [SVV\_DATASHARE\_CONSUMERS](r_SVV_DATASHARE_CONSUMERS.md)
   + [SVV\_DATASHARE\_OBJECTS](r_SVV_DATASHARE_OBJECTS.md)

1. The consumer administrators review the datashares that they're granted access to, and they review the contents of each datashare by viewing inbound datashares using [SVV\_DATASHARES](r_SVV_DATASHARES.md). 

   To consume shared data, each consumer administrator creates an Amazon Redshift database from the datashare. The administrator then assigns permissions to appropriate users and roles in the consumer cluster. Users and roles can list the shared objects as part of the standard metadata queries by viewing the following metadata system views and can start querying data immediately.
   + [SVV\_REDSHIFT\_COLUMNS](r_SVV_REDSHIFT_COLUMNS.md)
   + [SVV\_REDSHIFT\_DATABASES](r_SVV_REDSHIFT_DATABASES.md)
   + [SVV\_REDSHIFT\_FUNCTIONS](r_SVV_REDSHIFT_FUNCTIONS.md)
   + [SVV\_REDSHIFT\_SCHEMAS](r_SVV_REDSHIFT_SCHEMAS.md)
   + [SVV\_REDSHIFT\_TABLES](r_SVV_REDSHIFT_TABLES.md)

   To view objects in both local and shared Amazon Redshift schemas, as well as in external schemas, use the following metadata system views:
   + [SVV\_ALL\_COLUMNS](r_SVV_ALL_COLUMNS.md)
   + [SVV\_ALL\_SCHEMAS](r_SVV_ALL_SCHEMAS.md)
   + [SVV\_ALL\_TABLES](r_SVV_ALL_TABLES.md)

When you connect to a consumer database, cross-database discovery is disabled. The metadata system views return metadata only for the shared objects in the datashare associated with the connected database.
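For example, a producer administrator might audit a datashare named `salesshare` (a hypothetical name) with queries like the following:

```
-- Outbound datashares and the objects they contain
SELECT share_name, share_type FROM svv_datashares;
SELECT object_name, object_type FROM svv_datashare_objects
WHERE share_name = 'salesshare';

-- Recent datashare changes and consumer usage on the producer
SELECT * FROM sys_datashare_change_log ORDER BY record_time DESC LIMIT 10;
SELECT * FROM sys_datashare_usage_producer ORDER BY record_time DESC LIMIT 10;
```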

## Integrating Amazon Redshift data sharing with AWS CloudTrail


Data sharing is integrated with AWS CloudTrail. CloudTrail is a service that provides a record of actions taken by a user, a role, or an AWS service in Amazon Redshift. CloudTrail captures all API calls for data sharing as events. The calls captured include calls from the Amazon Redshift console and code calls to the data sharing API operations. For more information about Amazon Redshift integration with AWS CloudTrail, see [Logging with CloudTrail](https://docs.aws.amazon.com/redshift/latest/mgmt/logging-with-cloudtrail.html).

For more information about CloudTrail, see [How CloudTrail works](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html).