

# Adding outputs to a MediaConnect flow
You can add outputs that use the Zixi pull protocol.

For transport stream flows, you can add up to 50 outputs. However, for optimal performance, follow the guidance offered in [Best practices](best-practices.md). Every output must have a name, a [protocol](protocols.md), an IP address, and a port.

**Note**  
If you intend to set up an entitlement for an output, don't create the output. Instead, [grant an entitlement](entitlements-grant.md). When the subscriber creates a flow using your content as the source, the service creates an output on your flow.

The method you use to add an output to a flow depends on the type of output that you want to add:
+ [Standard output (transport stream flow)](outputs-add-standard.md) – Sends compressed content to any destination that is not a virtual private cloud (VPC) that you configured using Amazon Virtual Private Cloud.
+ [VPC output (transport stream flow)](outputs-add-vpc.md) – Sends compressed content to a VPC that you configured using Amazon Virtual Private Cloud.
+ [NDI® output (transport stream flow)](outputs-add-ndi.md) – Sends high-quality, low-latency content over IP networks so that it can be received by the production systems within your VPC network.
+ [VPC output (CDI flow)](outputs-add-vpc.md) – Sends uncompressed content to a VPC that you configured using Amazon Virtual Private Cloud.

# Adding standard outputs to a MediaConnect flow

For transport stream flows, you can add up to 50 outputs. However, for optimal performance, follow the guidance offered in [Best practices](best-practices.md). A standard output goes to any destination that is not part of a virtual private cloud (VPC) that you created using Amazon Virtual Private Cloud.

**Note**  
CDI flows don't support standard outputs.

**To add a standard output to a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that you want to add an output to.

   The details page for that flow appears. 

1. Choose the **Outputs** tab.

1. Choose **Add output**.

1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the AWS Elemental MediaConnect console and is not visible to the end user.

1. For **Output type**, choose **Standard output**.

1. For **Description**, enter a description that will remind you later where this output is going. This might be the company name or notes about the setup.

1. Determine which protocol you want to use for the output.

1. For specific instructions based on the protocol that you want to use, choose one of the following tabs:

------
#### [ RIST ]

   1. For **Protocol**, choose **RIST**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RIST protocol requires one additional port for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the port that is +1 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000 and 4001.

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.
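
If you prefer the AWS CLI, a RIST output can be described in a `cli-input-json` file like the following sketch. The flow ARN, output name, and destination address are placeholders; the fields mirror the `add-flow-outputs` command shown later in this topic.

```
{
    "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
    "Outputs": [
        {
            "Name": "RISTOutput",
            "Description": "RIST output to affiliate receiver",
            "Destination": "192.0.2.20",
            "Port": 4000,
            "Protocol": "rist",
            "SmoothingLatency": 0
        }
    ]
}
```

Because RIST reserves one extra port for error correction, this example occupies ports 4000 and 4001 at the destination.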

------
#### [ RTP or RTP-FEC ]

   1. For **Protocol**, choose **RTP** or **RTP-FEC**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RTP-FEC protocol requires two additional ports for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the ports that are +2 and +4 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000, 4002, and 4004. 

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.

------
#### [ SRT listener ]

   1. For **Name**, specify a name for your source. This value is an identifier that is visible only on the MediaConnect console. It is not visible to anyone outside of the current AWS account.

   1. For **Protocol**, choose **SRT listener**. 

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms.

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

   1. For **CIDR allow list**, specify a range of IP addresses that are allowed to view content from your output. Format the IP addresses as a Classless Inter-Domain Routing (CIDR) block, for example, 10.24.34.0/23. For more information about CIDR notation, see [RFC 4632](https://tools.ietf.org/html/rfc4632).
**Important**  
Specify a CIDR block that is as precise as possible. Include only the IP addresses that you want to contribute content to your flow. If you specify a CIDR block that is too wide, it allows for the possibility of outside parties sending content to your flow.
**Tip**  
To specify an additional CIDR block, choose **Add**. You can specify up to three CIDR blocks.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. **Encryption type** is not selectable. **srt-password** is the only encryption type available for this protocol.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).
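
The same SRT listener output can be sketched as a `cli-input-json` entry. The ARNs, port, and CIDR block are placeholders, and additional encryption fields may be required depending on your configuration; `srt-password` is the key type that corresponds to the console's fixed **Encryption type**.

```
{
    "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
    "Outputs": [
        {
            "Name": "SRTListenerOutput",
            "Protocol": "srt-listener",
            "Port": 5000,
            "MinLatency": 2000,
            "CidrAllowList": ["10.24.34.0/23"],
            "Encryption": {
                "KeyType": "srt-password",
                "RoleArn": "arn:aws:iam::111122223333:role/MediaConnectSecretsRole",
                "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:SrtPassword-a1b2c3"
            }
        }
    ]
}
```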

------
#### [ SRT caller ]

   1. For **Protocol**, choose **SRT caller**.

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms. 

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

   1. For **Destination IP address**, enter the IP address or domain of the output's destination.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. **Encryption type** is not selectable. **srt-password** is the only encryption type available for this protocol.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).
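
The buffer relationship described above can be checked with a quick calculation. The following sketch is illustrative only; the function and the numbers in it are hypothetical, not part of MediaConnect or the SRT library:

```python
def srt_buffer_fits(max_bitrate_bps, min_latency_ms, receiver_buffer_bytes):
    """Return True if the receiver buffer can absorb the data that
    accumulates during the recovery latency window."""
    # Bytes that arrive while the recovery latency window is open
    bytes_in_flight = (max_bitrate_bps / 8) * (min_latency_ms / 1000)
    return bytes_in_flight <= receiver_buffer_bytes

# A 20 Mb/s stream with the default 2,000 ms minimum latency
# accumulates 5,000,000 bytes during the recovery window.
print(srt_buffer_fits(20_000_000, 2000, 6_000_000))  # True: buffer is large enough
print(srt_buffer_fits(20_000_000, 2000, 4_000_000))  # False: the buffer would overflow
```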

------
#### [ Zixi pull ]

   1. For **Protocol**, choose **Zixi pull**. 

   1. For **Stream ID**, enter the **Stream** value that was configured when you added the input on the Zixi receiver. In the Zixi receiver, this value is found in the **Stream parameters** section.
**Important**  
If you keep this field blank, the service uses the output name as the stream ID. Because the stream ID must match the value that is set in the Zixi receiver, you must specify the stream ID if it is not exactly the same as the output name.

   1. For **Remote ID**, enter the **ID** value that is assigned to the Zixi receiver. In the Zixi receiver, this value is located in the **General** settings menu, labeled **ID**. It can also be found on the Zixi receiver **Status** page.

   1. For **Maximum latency**, specify the size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value between 0 and 60,000 ms. If you keep this field blank, the service uses the latency that is set in the receiver.

   1. For **CIDR allow list**, specify a range of IP addresses that are allowed to retrieve content from your output. Format the IP addresses as a Classless Inter-Domain Routing (CIDR) block, for example, 10.24.34.0/23. For more information about CIDR notation, see [RFC 4632](https://tools.ietf.org/html/rfc4632).
**Tip**  
To specify an additional CIDR block, choose **Add**. You can specify up to three CIDR blocks.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Encryption type**, choose **Static key**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the encryption key](encryption-static-key-set-up.md#encryption-static-key-set-up-store-key).

      1. For **Encryption algorithm**, choose the type of encryption that you want to use to encrypt the source.
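
A Zixi pull output can be sketched as follows in a `cli-input-json` file. The stream ID, remote ID, and CIDR block shown here are placeholders and must match your Zixi receiver's configuration.

```
{
    "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
    "Outputs": [
        {
            "Name": "ZixiPullOutput",
            "Protocol": "zixi-pull",
            "StreamId": "ZixiStream1",
            "RemoteId": "ZixiReceiver1",
            "MaxLatency": 4000,
            "CidrAllowList": ["10.24.34.0/23"]
        }
    ]
}
```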

------
#### [ Zixi push ]

   1. For **Protocol**, choose **Zixi push**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Stream ID**, enter the stream ID that is set in the Zixi receiver.
**Important**  
If you keep this field blank, the service uses the output name as the stream ID. Because the stream ID must match the value set in the Zixi receiver, you must specify the stream ID if it is not exactly the same as the output name.

   1. For **Maximum latency**, specify the size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value between 0 and 60,000 ms. If you keep this field blank, the service uses the default value of 6,000 ms.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Encryption type**, choose **Static key**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the encryption key](encryption-static-key-set-up.md#encryption-static-key-set-up-store-key).

      1. For **Encryption algorithm**, choose the type of encryption that you want to use to encrypt the source.

------

1. Choose **Add output**.

**To add an output to a flow (AWS CLI)**

1. Create a JSON file that contains the details of the output that you want to add to the flow.

   The following example shows the structure for the contents of the file:

   ```
   {
       "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
       "Outputs": [
           {
               "Description": "RTP-FEC Output",
               "Destination": "192.0.2.12",
               "Name": "RTPOutput",
               "Port": 5020,
               "Protocol": "rtp-fec",
               "SmoothingLatency": 100
           }
       ]
   }
   ```

1. In the AWS CLI, use the `add-flow-output` command:

   ```
   aws mediaconnect add-flow-outputs --flow-arn "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame" --cli-input-json file://addFlowOutput.json --region us-east-1
   ```

   The following example shows the return value:

   ```
   {
       "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
       "Outputs": [
           {
               "Name": "RTPOutput",
               "Port": 5020,
               "Transport": {
                   "SmoothingLatency": 100,
                   "Protocol": "rtp-fec"
               },
               "Destination": "192.0.2.12",
               "OutputArn": "arn:aws:mediaconnect:us-east-1:111122223333:output:2-3aBC45dEF67hiJ89-c34de5fG678h:RTPOutput",
               "Description": "RTP-FEC Output"
           }
       ]
   }
   ```

# Adding VPC outputs to a flow

You can add an output to send content from your AWS Elemental MediaConnect flow to your VPC without going over the public internet.

A VPC output goes to a virtual private cloud (VPC) that you created using Amazon Virtual Private Cloud.

For transport stream flows, you can add outputs (up to 50) even if the flow is active. For CDI flows, you can add outputs (up to 10) only if the flow is in standby mode. For optimal performance, follow the guidance offered in [Best practices](best-practices.md). 

**To add a VPC output to a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that you want to add an output to.

   The details page for that flow appears. 

1. Choose the **Outputs** tab.

1. Choose **Add output**.

1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the AWS Elemental MediaConnect console and is not visible to the end user.

1. For **Output type**, choose **VPC output**.

1. For **Description**, enter a description that will remind you later where this output is going. This might be the company name or notes about the setup.

1. Determine which protocol you want to use for the output. The protocol options are dependent on the flow type.
   + For transport stream flows, the protocol options are: RTP, RTP-FEC, RIST, SRT, and Zixi.
   + For CDI flows, the protocol options are: CDI and ST 2110 JPEG XS.

1. For specific instructions based on the protocol that you want to use, choose one of the following tabs:

------
#### [ RIST ]

   1. For **Protocol**, choose **RIST**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RIST protocol requires one additional port for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the port that is +1 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000 and 4001.

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

------
#### [ RTP or RTP-FEC ]

   1. For **Protocol**, choose **RTP** or **RTP-FEC**. 
**Note**  
RTP and RTP-FEC outputs are compliant with the SMPTE 2022-7 standard. If your downstream receiver supports 2022-7 source merging, RTP and RTP-FEC outputs will be compatible.

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).
**Note**  
The RTP-FEC protocol requires two additional ports for error correction. To accommodate this requirement, AWS Elemental MediaConnect reserves the ports that are +2 and +4 from the port that you specify. For example, if you specify port 4000 for the output, the service assigns ports 4000, 4002, and 4004. 

   1. For **Smoothing latency**, specify the additional delay that you want to use with output smoothing. We recommend that you specify a value of 0 ms to disable smoothing. However, if the receiver can't process the stream properly, specify a value between 100 and 1,000 ms. This way, AWS Elemental MediaConnect attempts to correct jitter from the flow source. If you keep this field blank, the service uses the default value of 0 ms.

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

------
#### [ SRT listener ]

   1. For **Name**, specify a name for your source. This value is an identifier that is visible only on the MediaConnect console. It is not visible to anyone outside of the current AWS account.

   1. For **Output type**, select **VPC output**.

   1. For **Protocol**, choose **SRT listener**. 

   1. For **Description**, enter a description that can help you distinguish one output from another. This might be the company name or notes about the setup.

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms. 

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).

------
#### [ SRT caller ]

   1. For **Name**, specify a name for your source. This value is an identifier that is visible only on the MediaConnect console. It is not visible to anyone outside of the current AWS account.

   1. For **Output type**, select **VPC output**.

   1. For **Protocol**, choose **SRT caller**.

   1. For **Description**, enter a description that can help you distinguish one output from another. This might be the company name or notes about the setup.

   1. For **Minimum latency**, specify the minimum size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value from 10–15,000 ms. If you keep this field blank, MediaConnect uses the default value of 2,000 ms. 

      The SRT protocol uses a **minimum latency** configuration on each side of the connection. The larger of these two values is used as the *recovery latency*. If the transmitted bitrate, multiplied by the recovery latency, is higher than the *receiver buffer*, the buffer will overflow and the stream can fail with a `Buffer Overflow Error`. On the SRT receiver side, the receiver buffer is configured by the `SRTO_RCVBUF` value. The size of the receiver buffer is limited by the *flow control window size* (`SRTO_FC`) value. On the MediaConnect side, the receiver buffer is calculated as the **maximum bitrate** value multiplied by the **minimum latency** value. For more information about the SRT buffer, see the [SRT Configuration Guidelines](https://github.com/Haivision/srt/blob/master/docs/API/configuration-guidelines.md).

   1. For **Destination IP address**, enter the IP address or domain of the output's destination.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. **Encryption type** is not selectable. **srt-password** is the only encryption type available for this protocol.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the SRT password](encryption-srt-password-set-up.md#encryption-srt-password-set-up-password).

------
#### [ Zixi push ]

   1. For **Protocol**, choose **Zixi push**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **Stream ID**, enter the stream ID that is set in the Zixi receiver.
**Important**  
If you keep this field blank, the service uses the output name as the stream ID. Because the stream ID must match the value set in the Zixi receiver, you must specify the stream ID if it is not exactly the same as the output name.

   1. For **Maximum latency**, specify the size of the buffer (delay) that you want the service to maintain. A higher latency value means a longer delay in transmitting the stream, but more room for error correction. A lower latency value means a shorter delay, but less room for error correction. You can choose a value between 0 and 60,000 ms. If you keep this field blank, the service uses the default value of 6,000 ms.

   1. For **Output to VPC**, choose the name of the VPC interface that you want to send your output to.

   1. If you want to encrypt the video as it is sent to this output, do the following:

      1. In the **Encryption** section, choose **Enable**.

      1. For **Encryption type**, choose **Static key**.

      1. For **Role ARN**, specify the ARN of the role that you created when you [set up encryption](encryption-static-key-set-up.md#encryption-static-key-set-up-create-iam-role).

      1. For **Secret ARN**, specify the ARN that AWS Secrets Manager assigned when you [created the secret to store the encryption key](encryption-static-key-set-up.md#encryption-static-key-set-up-store-key).

      1. For **Encryption algorithm**, choose the type of encryption that you want to use to encrypt the source.
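
A Zixi push output that targets a VPC interface can be sketched as follows. The interface name must match a VPC interface that you have already added to the flow; the other values are placeholders.

```
{
    "FlowArn": "arn:aws:mediaconnect:us-east-1:111122223333:flow:1-23aBC45dEF67hiJ8-12AbC34DE5fG:BasketballGame",
    "Outputs": [
        {
            "Name": "ZixiPushVpcOutput",
            "Protocol": "zixi-push",
            "Destination": "10.0.1.15",
            "Port": 2088,
            "StreamId": "ZixiStream1",
            "MaxLatency": 6000,
            "VpcInterfaceAttachment": {
                "VpcInterfaceName": "vpc-interface-1"
            }
        }
    ]
}
```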

------
#### [ CDI ]

   1. For **Protocol**, choose **CDI**. 

   1. For **IP address**, choose the IP address where you want to send the output.

   1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

   1. For **VPC interface**, choose the name of the VPC interface that you want to send your output to.

   1. For each media stream that you want to send as part of the output, do the following:

      1. For **Media stream name**, choose the name of the media stream. You can only add the media streams that the source on your flow uses.

      1. For **Encoding name**, confirm the default value, which is pre-selected based on the media stream type.

      1. For **FMT**, specify the format type number (sometimes referred to as *RTP payload type*) of the media stream. This value should be in a format that the receiver recognizes.

------
#### [ ST 2110 JPEG XS ]

   1. For **Protocol**, choose **ST 2110 JPEG XS**. 

   1. For **VPC interface 1**, choose one of the VPC interfaces that you want to send content to and then choose the specific IP address where you want to send the output.

   1. For **VPC interface 2**, choose a second VPC interface that you want to send content to and then choose the specific IP address where you want to send the output. There is no priority between VPC interfaces 1 and 2.

   1. For each media stream that you want to send as part of the output, do the following:

      1. For **Media stream name**, choose the name of the media stream. You can only add the media streams that the source on your flow uses.

      1. For **Encoding name**, choose the format that was used to encode the data.
         + For ancillary data streams, set the encoding name to **smpte291**.
         + For audio streams, set the encoding name to **pcm**.
         + For video, set the encoding name to **jxsv**.

      1. For **Port**, choose the port that you want to use when the content is distributed to this output. For more information about ports, see [Output destinations](destinations.md).

      1. For **Encoder profile**, choose a setting for the compression. This property only applies if the source uses the CDI protocol. 

      1. For **Compression factor**, specify a value that you want the service to use when calculating the compression for the output. Valid values are floating point numbers in the range of 3.0 to 10.0, inclusive. The bitrate of the output is calculated as follows:

         Output bitrate = (1 / compression factor) × (source bitrate)

         This property only applies if the source uses the CDI protocol.

   1. Choose **Add output**.
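
The compression factor calculation above works out as in this short sketch (a hypothetical helper for illustration, not part of any AWS SDK):

```python
def st2110_output_bitrate(source_bitrate_bps, compression_factor):
    """Output bitrate = (1 / compression factor) * source bitrate."""
    if not 3.0 <= compression_factor <= 10.0:
        raise ValueError("compression factor must be in the range 3.0-10.0, inclusive")
    return source_bitrate_bps / compression_factor

# A 1.2 Gb/s source with a compression factor of 6.0 yields a 200 Mb/s output.
print(st2110_output_bitrate(1_200_000_000, 6.0))  # 200000000.0
```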

------

# Adding an NDI® output to a MediaConnect flow

This procedure walks you through the process of setting up an NDI® output and configuring how your NDI video streams appear to other devices in your VPC network. After you have the necessary prerequisites in place, you can add an NDI output to your MediaConnect flow, allowing you to distribute your video and audio streams over the NDI protocol within your VPC.

**Note**  
CDI flows don't support NDI outputs.

## Prerequisites


We recommend reviewing the [NDI outputs](outputs-using-ndi.md) documentation to familiarize yourself with this feature before getting started.

Before you can add NDI outputs to a flow, make sure you have the following resources in place:

**Large MediaConnect flow with NDI configuration enabled**  
+ If you haven't created a flow yet, you'll need to [create a transport stream flow](https://docs.aws.amazon.com/mediaconnect/latest/ug/flows-create.html). When you create the flow, you must set the size to large and make sure that NDI support is enabled.
+ The flow can be in ACTIVE or STANDBY status before you add an NDI output.

**Network infrastructure**  
+ **VPC** - You'll need a Virtual Private Cloud (VPC). For a quick start, you can use the [AWS CloudFormation VPC template](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) to automatically create a VPC with public and private subnets. For more information about VPCs, see the [Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/).
+ **Discovery servers** - NDI discovery servers must already be provisioned in your VPC network. MediaConnect connects to these servers, but it doesn't create them for you. AWS provides guidance for automatically deploying NDI discovery servers using AWS CloudFormation, including best practices for installation and configuration. For instructions, see [Setting Up NDI Discovery Servers for Broadcast Workflows](https://aws.amazon.com/solutions/guidance/programmatic-deployment-of-ndi-discovery-servers-for-broadcast-workflows-on-aws/).
+ **Security groups** - To enable NDI functionality, we recommend that you configure your security groups with a self-referencing ingress rule and egress rule. You can then attach this security group to the EC2 instances where your NDI servers are running within the VPC. This approach automatically permits all necessary NDI communication between components in your VPC. For guidance on setting up self-referencing security group rules, see [Security Group Referencing](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html#security-group-referencing) in the Amazon VPC User Guide.
+ In the following procedure, you’ll need to know your NDI server private IP address and your VPC subnet ID.
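The self-referencing security group rules recommended above can be sketched in AWS CloudFormation as follows. The resource names and the `VpcId` parameter are hypothetical, and in production you might scope the rules more narrowly than allow-all:

```yaml
# Security group whose members can exchange all traffic with each other.
NdiSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Self-referencing security group for NDI traffic
    VpcId: !Ref VpcId   # hypothetical parameter

# Self-referencing ingress rule: allow all traffic from members of the group.
NdiSelfIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref NdiSecurityGroup
    IpProtocol: "-1"
    SourceSecurityGroupId: !Ref NdiSecurityGroup

# Self-referencing egress rule: allow all traffic to members of the group.
NdiSelfEgress:
  Type: AWS::EC2::SecurityGroupEgress
  Properties:
    GroupId: !Ref NdiSecurityGroup
    IpProtocol: "-1"
    DestinationSecurityGroupId: !Ref NdiSecurityGroup
```

Defining the ingress and egress rules as separate resources avoids a circular reference, because each rule refers back to the security group it belongs to.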

## Procedure


Follow these steps to set up an NDI output and configure how your NDI video and audio streams appear to other devices in your VPC network.

**To add an NDI output to a flow (console)**

1. Open the MediaConnect console at [https://console.aws.amazon.com/mediaconnect/](https://console.aws.amazon.com/mediaconnect/).

1. On the **Flows** page, choose the name of the flow that you want to add an output to.

1. On the flow details page under **Flow size**, make sure the size is set to **Large**. 

1. On the flow details page under **NDI configuration**, configure your settings as follows:

   1. Set **Flow NDI support** to **Enabled** if it’s not already.

   1. (Optional) Enter an **NDI machine name**.
   + This name is used as a prefix to help you identify the NDI sources that your flow creates. For example, if you enter **MACHINENAME**, your NDI sources will appear as **MACHINENAME** `(ProgramName)`.
   + If you leave this blank, MediaConnect generates a unique 12-character ID as the prefix. This ID is derived from the flow's Amazon Resource Name (ARN), so the machine name references the flow resource.
**Tip**  
Thoughtful naming is especially important when you have multiple flows creating NDI sources. For example, a production environment with 100 NDI sources would benefit from clear, descriptive machine name prefixes like `STUDIO-A`, `STUDIO-B`, `NEWSROOM`, and so on. 

   1. Add up to three **NDI discovery servers**. For each server, provide the following information:
   + Enter the private IP address of the discovery server. The address must be reachable from the VPC subnet that the VPC interface adapter points to. Don't use a public IP address.
   + Select the VPC interface adapter to control network access.
   + (Optional) Specify a port number. If you leave this blank, MediaConnect uses the NDI discovery server default of TCP port 5959.
**Note**  
DNS names aren't currently supported for discovery servers.
**Tip**  
Using multiple discovery servers improves reliability and helps ensure that your NDI sources are discoverable across your network.

1. Choose the **Outputs** tab.

1. Choose **Add output**.

1. For **Name**, specify a name for your output. This value is an identifier that is visible only on the AWS Elemental MediaConnect console and is not visible to the end user.

1. For **Output type**, choose **NDI output**.

1. For **NDI codec**, choose **SpeedHQ**.

1. For **NDI SpeedHQ quality**, enter a value between 100 and 200.
   + This setting adjusts the NDI encoder's target bitrate as a percentage of the default. 
   + The default value is 100, which uses the standard NDI bitrate. Values up to 200 increase the target bitrate proportionally (for example, 200 doubles it).
**Note**  
Some kinds of content (such as high-motion sports) benefit from a higher quality setting. However, keep in mind that higher quality settings increase each output's bitrate, which reduces the total number of outputs that a flow can generate within its capacity (up to 2.5 Gbps).
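
The quality setting described above scales the NDI encoder's target bitrate as a percentage of the default. A minimal sketch, where the default bitrate value is a hypothetical example and not a documented MediaConnect figure:

```python
def speedhq_target_bitrate(default_bitrate_bps: float, quality: int) -> float:
    """Scale the default NDI bitrate by the SpeedHQ quality setting (100-200)."""
    if not 100 <= quality <= 200:
        raise ValueError("NDI SpeedHQ quality must be between 100 and 200")
    return default_bitrate_bps * (quality / 100)

# A quality of 200 doubles the default target bitrate.
print(speedhq_target_bitrate(150_000_000, 200))  # 300000000.0
```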

1. (Optional) Enter an **NDI program name**.
   + This name is used as a suffix to help you identify the NDI sources that your flow creates. For example, if you enter **MyNDIProgram**, your NDI sources will appear as `MACHINENAME` **(MyNDIProgram)**.
   + If you leave this blank, MediaConnect uses the name of the output.
**Tip**  
Thoughtful naming is especially important when you have multiple flows creating NDI sources. For example, in a production environment, you might use names like `MainCam`, `BackupCam`, `GraphicsOutput`, and so on to clearly identify different video feeds from the same machine.

1. Choose **Add output**.
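
The naming behavior described in the preceding steps can be sketched as follows. The machine name acts as a prefix and the program name as a suffix, producing source names of the form `MACHINENAME (ProgramName)`. This helper is illustrative only; it doesn't model the fallbacks MediaConnect applies when a field is left blank (an ARN-derived ID for the machine name, or the output name for the program name).

```python
def ndi_source_name(machine_name: str, program_name: str) -> str:
    """Compose an NDI source name from the machine name prefix and program name suffix."""
    return f"{machine_name} ({program_name})"

print(ndi_source_name("STUDIO-A", "MainCam"))  # STUDIO-A (MainCam)
```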

## Next steps


After you [start your flow](flows-start.md), you should be able to see the MediaConnect NDI flow output as an available NDI source in your discovery server. You can then subscribe to it to receive NDI traffic. For more information, see the [NDI documentation](https://docs.ndi.video/all/developing-with-ndi/introduction).