

 Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Scalar Python UDFs

A scalar Python UDF incorporates a Python program that runs when the function is called and returns a single value. The [CREATE FUNCTION](r_CREATE_FUNCTION.md) command defines the following parameters:
+ (Optional) Input arguments. Each argument must have a name and a data type. 
+ One return data type.
+ One executable Python program.

The input and return data types for Python UDFs can be any of the following types:
+  SMALLINT 
+  INTEGER 
+  BIGINT 
+  DECIMAL 
+  REAL 
+  DOUBLE PRECISION 
+  BOOLEAN 
+  CHAR 
+  VARCHAR 
+  DATE 
+  TIMESTAMP 
+  ANYELEMENT 

The aliases for these types are also valid. For a full list of data types and their aliases, see [Data types](c_Supported_data_types.md).

When Python UDFs use the data type ANYELEMENT, Amazon Redshift automatically converts to a standard data type based on the arguments supplied at runtime. For more information, see [ANYELEMENT data type](udf-data-types.md#udf-anyelement-data-type).

When an Amazon Redshift query calls a scalar UDF, the following steps occur at runtime:

1. The function converts the input arguments to Python data types.

   For a mapping of Amazon Redshift data types to Python data types, see [Python UDF data types](udf-data-types.md).

1. The function runs the Python program, passing the converted input arguments.

1. The Python code returns a single value. The data type of the return value must correspond to the RETURNS data type specified by the function definition.

1. The function converts the Python return value to the specified Amazon Redshift data type, then returns that value to the query.

**Note**  
Python 3 isn’t available for Python UDFs. To get Python 3 support for Amazon Redshift UDFs, use [Scalar Lambda UDFs](udf-creating-a-lambda-sql-udf.md) instead.

# Scalar Python UDF example

The following example creates a function that compares two numbers and returns the larger value. Note that the indentation of the code between the double dollar signs (`$$`) is a Python requirement. For more information, see [CREATE FUNCTION](r_CREATE_FUNCTION.md).

```
create function f_py_greater (a float, b float)
  returns float
stable
as $$
  if a > b:
    return a
  return b
$$ language plpythonu;
```

The following query calls the new `f_py_greater` function to query the SALES table and return either COMMISSION or 20 percent of PRICEPAID, whichever is greater.

```
select f_py_greater (commission, pricepaid*0.20) from sales;
```
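The body between the dollar quotes is ordinary Python, so you can sanity-check the same logic outside Redshift before creating the function. The following sketch simply wraps the UDF body as a plain Python function:

```python
# The body of f_py_greater, wrapped as an ordinary Python function
# so the comparison logic can be verified outside Redshift.
def f_py_greater(a, b):
    if a > b:
        return a
    return b

print(f_py_greater(3.5, 2.0))  # 3.5
```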

# Python UDF data types

Python UDFs can use any standard Amazon Redshift data type for the input arguments and the function's return value. In addition to the standard data types, UDFs support the data type *ANYELEMENT*, which Amazon Redshift automatically converts to a standard data type based on the arguments supplied at runtime. Scalar UDFs can return a data type of ANYELEMENT. For more information, see [ANYELEMENT data type](#udf-anyelement-data-type).

During execution, Amazon Redshift converts the arguments from Amazon Redshift data types to Python data types for processing. It then converts the return value from the Python data type to the corresponding Amazon Redshift data type. For more information about Amazon Redshift data types, see [Data types](c_Supported_data_types.md).

The following table maps Amazon Redshift data types to Python data types.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/udf-data-types.html)

## ANYELEMENT data type

ANYELEMENT is a *polymorphic data type*. This means that if a function is declared using ANYELEMENT for an argument's data type, the function can accept any standard Amazon Redshift data type as input for that argument when the function is called. The ANYELEMENT argument is set to the data type actually passed to it when the function is called.

If a function uses multiple ANYELEMENT data types, they must all resolve to the same actual data type when the function is called. All ANYELEMENT argument data types are set to the actual data type of the first argument passed to an ANYELEMENT. For example, a function declared as `f_equal(anyelement, anyelement)` will take any two input values, so long as they are of the same data type.

If the return value of a function is declared as ANYELEMENT, at least one input argument must be ANYELEMENT. The actual data type for the return value is the same as the actual data type supplied for the ANYELEMENT input argument. 
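The resolution rule can be illustrated with a plain-Python sketch. `resolve_anyelement` below is a hypothetical helper written for illustration only, not a Redshift API; it mimics how every ANYELEMENT argument takes the actual type of the first one:

```python
# Hypothetical illustration of the ANYELEMENT rule: every ANYELEMENT
# argument resolves to the actual type of the first one; mixed types fail.
def resolve_anyelement(*args):
    resolved = type(args[0])
    if any(type(a) is not resolved for a in args[1:]):
        raise TypeError("ANYELEMENT arguments must resolve to one data type")
    return resolved.__name__

print(resolve_anyelement(1, 2))      # int
print(resolve_anyelement('a', 'b'))  # str
```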

# Python language support for UDFs

You can create a custom UDF based on the Python programming language. The [Python 2.7 standard library](https://docs.python.org/2/library/index.html) is available for use in UDFs, with the exception of the following modules:
+ ScrolledText
+ Tix
+ Tkinter
+ tk
+ turtle
+ smtpd

In addition to the Python Standard Library, the following modules are part of the Amazon Redshift implementation:
+ [numpy 1.8.2](http://www.numpy.org/)
+ [pandas 0.14.1](https://pandas.pydata.org/)
+ [python-dateutil 2.2](https://dateutil.readthedocs.org/en/latest/)
+ [pytz 2014.7](https://pypi.org/project/pytz/2014.7/)
+ [scipy 0.12.1](https://www.scipy.org/)
+ [six 1.3.0](https://pypi.org/project/six/1.3.0/)
+ [wsgiref 0.1.2](https://pypi.python.org/pypi/wsgiref)

You can also import your own custom Python modules and make them available for use in UDFs by executing a [CREATE LIBRARY](r_CREATE_LIBRARY.md) command. For more information, see [Example: Importing custom Python library modules](udf-importing-custom-python-library-modules.md).

**Important**  
Amazon Redshift blocks all network access and write access to the file system through UDFs.

**Note**  
Python 3 isn’t available for Python UDFs. To get Python 3 support for Amazon Redshift UDFs, use [Scalar Lambda UDFs](udf-creating-a-lambda-sql-udf.md) instead.

# Example: Importing custom Python library modules

You define scalar functions using Python language syntax. You can use the Python Standard Library modules and Amazon Redshift preinstalled modules. You can also create your own custom Python library modules and import the libraries into your clusters, or use existing libraries from Python or third parties. 

You cannot create a library that contains a module with the same name as a Python Standard Library module or an Amazon Redshift preinstalled Python module. If an existing user-installed library uses the same Python package as a library you create, you must drop the existing library before installing the new library. 

You must be a superuser or have `USAGE ON LANGUAGE plpythonu` privilege to install custom libraries; however, any user with sufficient privileges to create functions can use the installed libraries. You can query the [PG\_LIBRARY](r_PG_LIBRARY.md) system catalog to view information about the libraries installed on your cluster.

## Importing a custom Python module into your cluster


This section provides an example of importing a custom Python module into your cluster. To perform the steps in this section, you must have an Amazon S3 bucket, where you upload the library package. You then install the package in your cluster. For more information about creating buckets, go to [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingaBucket.html) in the *Amazon Simple Storage Service User Guide*.

In this example, let's suppose that you create UDFs to work with positions and distances in your data. Connect to your Amazon Redshift cluster from a SQL client tool, and run the following commands to create the functions. 

```
CREATE FUNCTION f_distance (x1 float, y1 float, x2 float, y2 float) RETURNS float IMMUTABLE as $$
    def distance(x1, y1, x2, y2):
        import math
        return math.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
 
    return distance(x1, y1, x2, y2)
$$ LANGUAGE plpythonu;
 
CREATE FUNCTION f_within_range (x1 float, y1 float, x2 float, y2 float) RETURNS bool IMMUTABLE as $$ 
    def distance(x1, y1, x2, y2):
        import math
        return math.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
 
    return distance(x1, y1, x2, y2) < 20
$$ LANGUAGE plpythonu;
```

Note that a few lines of code are duplicated in the previous functions. This duplication is necessary because a UDF cannot reference the contents of another UDF, and both functions require the same functionality. However, instead of duplicating code in multiple functions, you can create a custom library and configure your functions to use it. 

To do so, first create the library package by following these steps: 

1. Create a folder named **geometry**. This folder is the top level package of the library.

1. In the **geometry** folder, create a file named `__init__.py`. Note that the file name has two leading and two trailing underscore characters. This file indicates to Python that the package can be initialized.

1. Also in the **geometry** folder, create a folder named **trig**. This folder is the subpackage of the library.

1. In the **trig** folder, create another file named `__init__.py` and a file named `line.py`. In this folder, `__init__.py` indicates to Python that the subpackage can be initialized and that `line.py` is the file that contains library code.

   Your folder and file structure should be the same as the following: 

   ```
   geometry/
      __init__.py
      trig/
         __init__.py
         line.py
   ```

    For more information about package structure, go to [Modules](https://docs.python.org/2/tutorial/modules.html) in the Python tutorial on the Python website. 

1.  The following code contains a class and member functions for the library. Copy and paste it into `line.py`. 

   ```
   class LineSegment:
     def __init__(self, x1, y1, x2, y2):
       self.x1 = x1
       self.y1 = y1
       self.x2 = x2
       self.y2 = y2
     def angle(self):
       import math
       return math.atan2(self.y2 - self.y1, self.x2 - self.x1)
     def distance(self):
       import math
       return math.sqrt((self.y2 - self.y1) ** 2 + (self.x2 - self.x1) ** 2)
   ```
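Because `line.py` is plain Python, you can sanity-check the class outside Redshift before packaging it. The following sketch reproduces the class with the `math` import hoisted to the top of the file:

```python
import math

# The LineSegment class from line.py, reproduced here so the
# sanity check is self-contained outside Redshift.
class LineSegment:
    def __init__(self, x1, y1, x2, y2):
        self.x1 = x1
        self.y1 = y1
        self.x2 = x2
        self.y2 = y2
    def angle(self):
        return math.atan2(self.y2 - self.y1, self.x2 - self.x1)
    def distance(self):
        return math.sqrt((self.y2 - self.y1) ** 2 + (self.x2 - self.x1) ** 2)

seg = LineSegment(0, 0, 3, 4)
print(seg.distance())  # 5.0 (a 3-4-5 right triangle)
```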

 After you have created the package, do the following to prepare the package and upload it to Amazon S3. 

1. Compress the contents of the **geometry** folder into a .zip file named **geometry.zip**. Do not include the **geometry** folder itself; only include the contents of the folder as shown following: 

   ```
   geometry.zip
      __init__.py
      trig/
         __init__.py
         line.py
   ```

1. Upload **geometry.zip** to your Amazon S3 bucket.
**Important**  
 If the Amazon S3 bucket does not reside in the same region as your Amazon Redshift cluster, you must use the REGION option to specify the region in which the data is located. For more information, see [CREATE LIBRARY](r_CREATE_LIBRARY.md).

1.  From your SQL client tool, run the following command to install the library. Replace *<bucket\_name>* with the name of your bucket, and replace *<access key id>* and *<secret key>* with an access key and secret access key from your AWS Identity and Access Management (IAM) user credentials. 

   ```
   CREATE LIBRARY geometry LANGUAGE plpythonu FROM 's3://<bucket_name>/geometry.zip' CREDENTIALS 'aws_access_key_id=<access key id>;aws_secret_access_key=<secret key>';
   ```
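The first step above, which compresses the folder contents rather than the **geometry** folder itself, can be sketched with Python's `zipfile` module. The scratch directory and file names mirror the walkthrough; this is one illustrative way to build the archive, not the only one:

```python
import tempfile
import zipfile
from pathlib import Path

# Build the package layout from the walkthrough in a scratch directory.
root = Path(tempfile.mkdtemp())
(root / "geometry" / "trig").mkdir(parents=True)
for name in ("geometry/__init__.py", "geometry/trig/__init__.py",
             "geometry/trig/line.py"):
    (root / name).touch()

# Zip the *contents* of geometry/ so the archive root holds __init__.py
# and trig/, not a top-level geometry/ folder.
archive = root / "geometry.zip"
with zipfile.ZipFile(archive, "w") as zf:
    for f in (root / "geometry").rglob("*"):
        if f.is_file():
            zf.write(f, f.relative_to(root / "geometry"))

names = sorted(zipfile.ZipFile(archive).namelist())
print(names)  # ['__init__.py', 'trig/__init__.py', 'trig/line.py']
```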

 After you install the library in your cluster, you need to configure your functions to use the library. To do this, run the following commands. 

```
CREATE OR REPLACE FUNCTION f_distance (x1 float, y1 float, x2 float, y2 float) RETURNS float IMMUTABLE as $$ 
    from trig.line import LineSegment
 
    return LineSegment(x1, y1, x2, y2).distance()
$$ LANGUAGE plpythonu;
 
CREATE OR REPLACE FUNCTION f_within_range (x1 float, y1 float, x2 float, y2 float) RETURNS bool IMMUTABLE as $$ 
    from trig.line import LineSegment
 
    return LineSegment(x1, y1, x2, y2).distance() < 20
$$ LANGUAGE plpythonu;
```

In the preceding commands, the `from trig.line import LineSegment` statement eliminates the duplicated code from the original functions in this section. You can reuse the functionality provided by this library in multiple UDFs. Note that to import the module, you only need to specify the path to the subpackage and module name (`trig.line`). 

# Python UDF constraints

Within the constraints listed in this topic, you can use UDFs anywhere you use the Amazon Redshift built-in scalar functions. For more information, see [SQL functions reference](c_SQL_functions.md).

Amazon Redshift Python UDFs have the following constraints:
+ Python UDFs cannot access the network or read or write to the file system.
+ The total size of user-installed Python libraries cannot exceed 100 MB.
+ Amazon Redshift can only run one Python UDF at a time for provisioned clusters using automatic workload management (WLM) and for serverless workgroups. If you try to run more than one UDF concurrently, Amazon Redshift queues the remaining Python UDFs to run in the workload management queues. SQL UDFs don’t have a concurrency limit when using automatic WLM. 
+  When using manual WLM for provisioned clusters, the number of Python UDFs that can run concurrently per cluster is limited to one-fourth of the cluster’s total concurrency level. For example, a provisioned cluster with a concurrency of 15 can run a maximum of three concurrent Python UDFs. 
+ When using Python UDFs, Amazon Redshift doesn't support the SUPER and HLLSKETCH data types.

# Logging errors and warnings in Python UDFs

You can use the Python logging module to create user-defined error and warning messages in your UDFs. Following query execution, you can query the [SVL\_UDF\_LOG](r_SVL_UDF_LOG.md) system view to retrieve logged messages.

**Note**  
UDF logging consumes cluster resources and might affect system performance. We recommend implementing logging only for development and troubleshooting. 

During query execution, the log handler writes messages to the SVL\_UDF\_LOG system view, along with the corresponding function name, node, and slice. The log handler writes one row to SVL\_UDF\_LOG per message, per slice. Messages are truncated to 4096 bytes. The UDF log is limited to 500 rows per slice. When the log is full, the log handler discards older messages and adds a warning message to SVL\_UDF\_LOG.

**Note**  
The Amazon Redshift UDF log handler escapes newlines ( `\n` ), pipe ( `|` ) characters, and backslash ( `\` ) characters with a backslash ( `\` ) character.

By default, the UDF log level is set to WARNING. Messages with a log level of WARNING, ERROR, or CRITICAL are logged. Messages with the lower severities INFO, DEBUG, and NOTSET are ignored. To set the UDF log level, use the logger's `setLevel` method. For example, the following sets the log level to INFO.

```
logger.setLevel(logging.INFO)
```

For more information about using the Python logging module, see [Logging facility for Python](https://docs.python.org/2.7/library/logging.html) in the Python documentation.

The following example creates a function named f\_pyerror that imports the Python logging module, instantiates the logger, and logs an informational message.

```
CREATE OR REPLACE FUNCTION f_pyerror() 
RETURNS INTEGER
VOLATILE AS
$$
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info('Your info message here') 
return 0
$$ language plpythonu;
```
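The Python portion of this UDF runs unchanged outside Redshift, which is a convenient way to verify the logging calls before creating the function. The sketch below attaches an in-memory stream handler so the INFO message can be inspected directly; inside Redshift, the UDF log handler captures the message instead:

```python
import io
import logging

# Same logging calls as f_pyerror, with a stream handler attached so
# the message is visible outside Redshift.
buf = io.StringIO()
logger = logging.getLogger("udf_check")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.INFO)
logger.info('Your info message here')

print(buf.getvalue().strip())  # Your info message here
```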

The following example queries SVL\_UDF\_LOG to view the message logged in the previous example.

```
select funcname, query, node, slice, trim(message) as message 
from svl_udf_log;

  funcname  | query | node | slice |   message  
------------+-------+------+-------+------------------
  f_pyerror | 12345 |     1|     1 | Your info message here
```