Loading data into a database
You can use query editor v2 to load data into a database in an Amazon Redshift cluster or workgroup. This section covers how to load sample data, data from Amazon S3, and data from a local file.
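Loading data from Amazon S3 uses the Redshift COPY command under the hood. The following is a minimal sketch of such a statement, assuming a hypothetical target table, bucket path, IAM role, and Region; replace these placeholders with your own values.

    -- Minimal sketch of a COPY statement for loading CSV files from Amazon S3.
    -- The table, bucket path, IAM role ARN, and Region are placeholders.
    COPY myschema.mytable
    FROM 's3://amzn-s3-demo-bucket/load/'
    IAM_ROLE 'arn:aws:iam::111122223333:role/MyRedshiftLoadRole'
    FORMAT AS CSV
    IGNOREHEADER 1
    REGION 'us-east-1';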
Sample data
The query editor v2 comes with sample data and notebooks available to be loaded into a sample database and corresponding schema.
To load sample data, choose the icon associated with the sample data that you want to load. The query editor v2 then loads the data into a schema in the sample_data_dev database and creates a folder of saved notebooks.
The following sample datasets are available.
- tickit
  Most of the examples in the Amazon Redshift documentation use sample data called tickit. This data consists of seven tables: two fact tables and five dimensions. When you load this data, the tickit schema is updated with sample data. For more information about the tickit data, see Sample database in the Amazon Redshift Database Developer Guide.
- tpch
  This data is used for a decision support benchmark. When you load this data, the tpch schema is updated with sample data. For more information about the tpch data, see TPC-H.
- tpcds
  This data is used for a decision support benchmark. When you load this data, the tpcds schema is updated with sample data. For more information about the tpcds data, see TPC-DS.
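After the sample data loads, you can query it from query editor v2 using three-part names. The following query is an illustrative sketch, assuming the default sample_data_dev database and the tickit schema; it returns the ten events with the highest total ticket sales.

    -- Top 10 events by total ticket sales in the tickit sample data.
    SELECT e.eventname,
           SUM(s.pricepaid) AS total_sales
    FROM sample_data_dev.tickit.sales AS s
    JOIN sample_data_dev.tickit.event AS e
        ON s.eventid = e.eventid
    GROUP BY e.eventname
    ORDER BY total_sales DESC
    LIMIT 10;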