


# Loading data in Amazon Redshift
<a name="t_Loading_data"></a>

There are several ways to load data into an Amazon Redshift database. One popular source of data to load is Amazon S3 files. The table in the linked documentation below summarizes some of the methods you can use when starting from an Amazon S3 source.

[See the AWS documentation website for more details](http://docs.aws.amazon.com/redshift/latest/dg/t_Loading_data.html)

The COPY command is the most efficient way to load a table. You can also add data to your tables using INSERT commands, though doing so is much less efficient than using COPY. The COPY command can read from multiple data files or multiple data streams simultaneously. Amazon Redshift allocates the workload to the Amazon Redshift nodes and performs the load operations in parallel, including sorting the rows and distributing data across node slices.
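
As a minimal sketch, a COPY from Amazon S3 might look like the following. The table name, bucket path, and IAM role ARN are hypothetical placeholders, not values from this guide.

```sql
-- Load pipe-delimited files from an S3 prefix into an existing table.
-- The table name, bucket, prefix, and role ARN are example values only.
COPY sales
FROM 's3://amzn-s3-demo-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DELIMITER '|'
REGION 'us-east-1';
```

Because COPY loads all files that share the specified prefix in parallel, splitting the source data into multiple files of similar size typically loads faster than a single large file.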

**Note**  
Amazon Redshift Spectrum external tables are read-only. You can't COPY or INSERT to an external table.

To access data on other AWS resources, Amazon Redshift must have permission to access those resources and to perform the necessary actions to access the data. You can use AWS Identity and Access Management (IAM) to limit the access users have to Amazon Redshift resources and data.

After your initial data load, if you add, modify, or delete a significant amount of data, you should follow up by running a VACUUM command to reorganize your data and reclaim space after deletes. You should also run an ANALYZE command to update table statistics.
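
As a sketch of that follow-up step, assuming a table named `sales`:

```sql
-- Reclaim disk space left by deleted rows and re-sort the table,
-- then refresh the statistics that the query planner relies on.
VACUUM FULL sales;
ANALYZE sales;
```

Running VACUUM without a table name processes every table in the current database, which can take considerably longer.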

**Topics**
+ [Loading tables with the COPY command](t_Loading_tables_with_the_COPY_command.md)
+ [Create an S3 event integration to automatically copy files from Amazon S3 buckets](loading-data-copy-job.md)
+ [Loading tables with DML commands](t_Updating_tables_with_DML_commands.md)
+ [Performing a deep copy](performing-a-deep-copy.md)
+ [Analyzing tables](t_Analyzing_tables.md)
+ [Vacuuming tables](t_Reclaiming_storage_space202.md)
+ [Managing concurrent write operations](c_Concurrent_writes.md)
+ [Tutorial: Loading data from Amazon S3](tutorial-loading-data.md)