

# Planning a MediaLive workflow
<a name="container-planning-workflow"></a>

From the point of view of AWS Elemental MediaLive, a live streaming workflow involves three systems: 
+ An *upstream system* that provides the video content to MediaLive.
+ MediaLive, which ingests and transcodes the content.
+ A *downstream system* that is the destination for the output that MediaLive produces.

You should plan that workflow before you start to create the channel. As the first stage in that planning, you must set up the upstream and downstream systems. As the second stage, you must plan the channel itself—identify the content to extract from the source content, and plan the outputs to produce.

**Important**  
This procedure describes planning the workflow starting from the output and then working back to the input. This is the most effective way to plan a workflow.

**Topics**
+ [Preparing the upstream and downstream systems in a workflow](container-planning-uss-dss.md)
+ [Planning the outputs in the channel](planning-the-channel-in-workflow.md)

# Preparing the upstream and downstream systems in a workflow
<a name="container-planning-uss-dss"></a>

As the first stage in planning the workflow, you must set up the upstream and downstream systems. 

**Important**  
This procedure describes planning the workflow starting from the output and then working back to the input. This is the most effective way to plan a workflow.

**To plan the workflow**

1. Identify the output groups that you need to produce, based on the systems that are downstream of MediaLive. See [Identify the output group types for the downstream system](identify-downstream-system.md).

1. Identify the requirements for the video and audio encodes that you will include in each output group. See [Identify the encode requirements for the output groups](identify-dss-video-audio.md).

1. Decide on the channel class: whether to create a standard channel, which supports redundancy, or a single-pipeline channel, which doesn't. See [Identify resiliency requirements](plan-redundancy.md).

1. Assess the source content to make sure it's compatible with MediaLive and with the outputs that you need to create. For example, make sure that the source content has a video codec that MediaLive supports. See [Assess the upstream system](evaluate-upstream-system.md).

   After you have performed these four steps, you know whether MediaLive can handle your transcoding request.

1. Collect identifiers for the source content. For example, ask the operator at the upstream system for the identifiers for the different audio languages that you want to extract from the content. See [Collect information about the source content](planning-content-extract.md).

1. Coordinate with the downstream system or systems to provide a destination for the output groups that MediaLive will produce. See [Coordinate with downstream systems](setting-up-downstream-system.md).

# Identify the output group types for the downstream system
<a name="identify-downstream-system"></a>

The first step in planning any AWS Elemental MediaLive workflow is to determine which types of [*output groups*](what-is-terminology.md) you need to produce, based on the requirements and capabilities of the systems that are downstream of MediaLive.

Perform this work with the downstream system before you assess the [upstream system](evaluate-upstream-system.md). Decision making in a workflow starts with the downstream system, then works back to the upstream system.

**Important**  
You should already have identified the downstream system or systems that will receive MediaLive output for this workflow. If you haven't, do that research before continuing to prepare your workflow; this guide can't identify your downstream system for you. When you know what your downstream systems are, return to this section.

**To identify the output group**

1. Obtain the following information from your downstream system.
   + The required output formats. For example, HLS.
   + The application protocol for each. For example, HTTP.

1. Decide on the delivery mode for your outputs.
   + You might deliver output to a server on an Amazon EC2 instance in your VPC, or to Amazon S3. If either situation applies, you might want to set up delivery through your VPC. For more information, see [Delivering outputs via your VPC](delivery-out-vpc.md).
   + If neither situation applies, you will deliver in the regular way.

1. Make sure that MediaLive includes an *output group* that supports the output format and protocol that the downstream system requires. See [Output types supported in MediaLive](outputs-supported-containers.md). 

1. If your preferred downstream system is another AWS media service, [read this for information about choosing the service](dss-choose-service.md). 

1. If your downstream system supports Microsoft Smooth Streaming, see [Options for handling Microsoft Smooth output](downstream-system-for-mss.md) for options.

1. If you want to send your output to other AWS Regions or to other AWS accounts before distribution, consider creating a MediaConnect Router output group, which supports cross-Region and cross-account distribution.

1. Decide if you want to create an Archive output group in order to produce an archive file of the content. An archive file is a supplement to streaming; it isn't itself a streaming output. Typically, you create an archive file as a permanent file version of the streaming output. 

1. Decide if you want to create a Frame capture output group in order to produce a frame capture output. A Frame capture output is a supplement to streaming; it isn't itself a streaming output. This type of output might be useful for your workflow. For example, you might use a Frame capture output to create thumbnails of the content. 

1. Make a note of the output groups that you decide to create.

   For example, after you have followed these steps, you might have this list of output groups:
   + One HLS output group with AWS Elemental MediaPackage as the downstream system. 
   + One RTMP output group sending to the downstream system of a social media site.
   + One Archive output group as a record.

**Topics**
+ [Choosing among the AWS media services](dss-choose-service.md)
+ [Choosing between the HLS output group and MediaPackage output group](hls-choosing-hls-vs-emp.md)
+ [Options for handling Microsoft Smooth output](downstream-system-for-mss.md)

# Choosing among the AWS media services
<a name="dss-choose-service"></a>

If your preferred downstream system is another AWS media service, following are some useful tips for choosing the service to use: 
+ If you need to choose between AWS Elemental MediaPackage and AWS Elemental MediaStore for HLS outputs, follow these guidelines: 
  + Decide if you want to protect your content with a digital rights management (DRM) solution. DRM prevents unauthorized people from accessing the content. 
  + Decide if you want to insert ads in your content. 

  If you want either or both of these features, you should choose MediaPackage as the origin service because you will need to repackage the output. 

  If you don't want either of these features, you can choose MediaPackage or AWS Elemental MediaStore. AWS Elemental MediaStore is generally a simpler solution as an origin service, but it lacks the repackaging features of MediaPackage. 
+ If you have identified AWS Elemental MediaPackage as an origin service, decide if you will produce the HLS output using an HLS output group or a MediaPackage output group. For guidelines on making this choice, see the [next section](hls-choosing-hls-vs-emp.md).

# Choosing between the HLS output group and MediaPackage output group
<a name="hls-choosing-hls-vs-emp"></a>

If you want to deliver HLS output to AWS Elemental MediaPackage, you must decide if you want to create an HLS output group or a MediaPackage output group. 

## Delivering to MediaPackage v2
<a name="hls-choose-empv2"></a>

If you are delivering to a MediaPackage channel that uses MediaPackage v2, you must create an HLS output group. The MediaPackage operator can tell you if the channel uses version 2 of the API. One use case for using version 2 is to implement a glass-to-glass low latency workflow that includes both MediaLive and MediaPackage.

## Delivering to standard MediaPackage (v1)
<a name="hls-choose-emp"></a>

There are differences in the setup of each type of output group:
+ The MediaPackage output requires less setup. AWS Elemental MediaLive is already set up with most of the information that it needs to package and deliver the output to the AWS Elemental MediaPackage channel that you specify. This easier setup has benefits, but it also has drawbacks because you can't control some configuration. For information about how MediaLive sets up a MediaPackage output group, see [Result of this procedure](mediapackage-create-result.md).
+ For a MediaPackage output, the MediaLive channel and the AWS Elemental MediaPackage channel must be in the same AWS Region.
+ In a MediaPackage output, there are some restrictions on setting up ID3 metadata. For details, see [Working with ID3 metadata](id3-metadata.md). 

# Options for handling Microsoft Smooth output
<a name="downstream-system-for-mss"></a>

If you are delivering to a Microsoft Smooth Streaming server, the setup depends on whether you want to protect your content with a digital rights management (DRM) solution. DRM prevents unauthorized people from accessing the content. 
+ If you don't want to implement DRM, then create a Microsoft Smooth output group. 
+ If you do want to implement DRM, you can create an HLS or MediaPackage output group to send the output to AWS Elemental MediaPackage, then use AWS Elemental MediaPackage to add DRM. You will then set up AWS Elemental MediaPackage to deliver to the Microsoft Smooth origin server.

# Identify the encode requirements for the output groups
<a name="identify-dss-video-audio"></a>

After you have identified the output groups that you need to create, you must identify the requirements for the video and audio encodes that you will include in each output group. The downstream system controls these requirements.

Perform this work with the downstream system before you assess the [upstream system](evaluate-upstream-system.md). Decision making in a workflow starts with the downstream system, then works back to the upstream system.

**To identify the video and audio codecs in each output group**

Perform this procedure on every output group that you identified.

1. Obtain the following video information from your downstream system:
   + The video codec or codecs that they support. 
   + The maximum bitrate and maximum resolution that they can support.

1. Obtain the following audio information from your downstream system:
   + The supported audio codec or codecs.
   + The supported audio coding modes (for example, 2.0) in each codec.
   + The maximum supported bitrate for audio.
   + For an HLS or Microsoft Smooth output format, whether the downstream system requires the audio to be bundled with the video or each audio asset to appear in its own rendition. You will need this information when you organize the assets in the MediaLive outputs.

1. Obtain the following captions information from your downstream system.
   + The captions formats that they support.

1. Verify the video. Compare the video codecs that your downstream system requires to the video codecs that MediaLive supports for this output group. See the tables in [Supported codecs by output type](outputs-supported-codecs.md). Make sure that at least one of the downstream system's required codecs is supported. 

1. Verify the audio. Compare the audio codecs that your downstream system requires to the audio codecs that MediaLive supports for this output group. See the tables in [Supported codecs by output type](outputs-supported-codecs.md). Make sure that at least one of the downstream system's required codecs is supported. 

1. Skip assessment of the caption formats for now. You will assess those requirements in [a later section](assess-uss-captions.md).

1. Make a note of the video codecs and audio codecs that you can produce for each output group.

1. Decide whether you want to implement a trick-play track. For more information, see [Implementing a trick-play track](trick-play-solutions.md).

**Result of this step**

After you have performed this procedure, you will know what output groups you will create, and you will know which video and audio codecs those output groups can support. Therefore, you should have output information that looks like this example.


**Example**  

|  Output group   |  Downstream system  |  Video codecs supported by downstream system  | Audio codecs supported by downstream system | 
| --- | --- | --- | --- | 
|  HLS  |  MediaPackage  |  AVC  | AAC 2.0, Dolby Digital Plus | 
| RTMP | social media site | AVC | AAC 2.0 | 
| Archive | Amazon S3 | The downstream system doesn't dictate the codec—you choose the codec that you want. | The downstream system doesn't dictate the codec—you choose the codec that you want. | 
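The verification in the "Verify the video" and "Verify the audio" steps amounts to a set intersection between the codecs the downstream system requires and the codecs MediaLive supports for that output type. A minimal sketch in Python, using illustrative codec lists taken from the example table (the supported-codecs tables are the authority for the real lists):

```python
# Video codecs each downstream system requires, per output group
# (illustrative values from the example table above).
downstream_video = {"HLS": {"AVC"}, "RTMP": {"AVC"}}

# Video codecs MediaLive supports per output type (illustrative subset;
# see "Supported codecs by output type" for the real lists).
medialive_video = {"HLS": {"AVC", "HEVC"}, "RTMP": {"AVC"}}

def usable_codecs(downstream, supported):
    """Return, for each output group, the codecs you can actually produce."""
    return {group: downstream[group] & supported.get(group, set())
            for group in downstream}

result = usable_codecs(downstream_video, medialive_video)
print(result)  # {'HLS': {'AVC'}, 'RTMP': {'AVC'}}
```

If any output group's intersection is empty, MediaLive can't satisfy that downstream system, and you need to revisit the output group choice.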

# Identify resiliency requirements
<a name="plan-redundancy"></a>

Resiliency is the ability of the channel to continue to work when problems occur. MediaLive includes two resiliency features that you must plan for now. Decide now which of these features you want to implement, because they affect how many sources you need for your content, which requires discussion with your upstream system.

## Pipeline redundancy
<a name="decide-resil-pipeline"></a>

You can usually set up a channel with two pipelines, to provide resiliency within the channel processing pipeline. 

Pipeline redundancy is a feature that applies to the entire channel and to all the inputs attached to the channel. Early on in your planning of the channel, you must decide how you want to set up the pipelines. 

You set up for pipeline redundancy by setting up the channel as a *standard channel* so that it has two encoding pipelines. Both pipelines ingest the source content and produce output. If the current pipeline fails, the downstream system can detect that it is no longer receiving content and can switch to the other output. There is no disruption to the downstream system. MediaLive restarts the second pipeline within a few minutes.

For more information about pipeline redundancy, see [Implementing pipeline redundancy](plan-redundancy-mode.md).
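In the MediaLive API, this decision maps to a single field, `ChannelClass`, in the create-channel request. The following is a minimal sketch of where that field appears; the channel name is a hypothetical placeholder, and a real request also needs input attachments, destinations, and encoder settings:

```python
# Sketch of the channel-class choice in a CreateChannel request body.
# "STANDARD" gives two encoding pipelines; "SINGLE_PIPELINE" gives one.
channel_request = {
    "Name": "example-channel",   # hypothetical name
    "ChannelClass": "STANDARD",  # or "SINGLE_PIPELINE"
}

def pipeline_count(request):
    """A standard channel runs two encoding pipelines; single-pipeline runs one."""
    return 2 if request["ChannelClass"] == "STANDARD" else 1

print(pipeline_count(channel_request))  # 2
```

You would pass a complete version of this request to the MediaLive `CreateChannel` operation, for example through the boto3 `medialive` client's `create_channel` method.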

## Automatic input failover
<a name="decide-resil-aif"></a>

With some inputs, you can set up two inputs as an automatic input failover *pair*, in order to provide resiliency for one input in the channel.

Automatic input failover is a feature that applies to individual inputs. You don't have to make a decision about implementing automatic input failover when planning the channel. You can implement it later on, when attaching a new input, or when you want to upgrade an existing input so that it implements automatic input failover. 

To set up for automatic input failover, you set up two inputs (that have the exact same source content) as an *input failover pair*. Setting up this way provides resiliency in case of a failure in the upstream system, or between the upstream system and the channel. 

In the input pair, one of the inputs is the *active* input and one is on *standby*. MediaLive ingests both inputs, in order to always be ready to switch, but it usually discards the standby input immediately. If the active input fails, MediaLive immediately fails over and starts processing from the standby input, instead of discarding it.

You can implement automatic input failover in a channel that is set up for pipeline redundancy (a standard channel) or one that has no pipeline redundancy (a single-pipeline channel). 

For more information about automatic input failover, see [Implementing automatic input failover](automatic-input-failover.md).
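In the channel configuration, a failover pair is expressed on the input attachment: the attachment for the primary input names the secondary input in its `AutomaticInputFailoverSettings`. A minimal sketch, with hypothetical input IDs standing in for two inputs that carry the same source content from different upstream encoders:

```python
# Sketch of an input attachment configured for automatic input failover.
# The input IDs are hypothetical placeholders.
input_attachment = {
    "InputId": "1234567",                     # primary input
    "InputAttachmentName": "primary-with-failover",
    "AutomaticInputFailoverSettings": {
        "SecondaryInputId": "7654321",        # standby input
        "InputPreference": "PRIMARY_INPUT_PREFERRED",
    },
}

def failover_pair(attachment):
    """Return the (primary, standby) input IDs, or None if no failover is set up."""
    settings = attachment.get("AutomaticInputFailoverSettings")
    if settings is None:
        return None
    return attachment["InputId"], settings["SecondaryInputId"]

print(failover_pair(input_attachment))  # ('1234567', '7654321')
```

Because the setting lives on the individual attachment, you can add it when you attach an input later, without reworking the rest of the channel.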

## Comparison of the two features
<a name="resil-compare-features"></a>

Following is a comparison of pipeline redundancy and automatic input failover.
+ There is a difference in the failure that each feature deals with:

  Pipeline redundancy provides resiliency in case of a failure in the MediaLive encoder pipeline.

  Automatic input failover provides resiliency in case of a failure ahead of MediaLive, either in the upstream system or in the network connection between the upstream system and the MediaLive input.
+ Both features require two instances of the content source, so in both cases your upstream system must be able to provide two instances. 

  With pipeline redundancy, the two sources can originate from the same encoder. 

  With automatic input failover, the sources must originate from different encoders; otherwise, a single encoder failure takes down both sources at the same time, and the failover switch has nothing to switch to.
+ Pipeline redundancy applies to the entire channel. Therefore you should decide whether you want to implement it when you plan the channel. Automatic input failover applies only to specific input types. Therefore you could, for example, decide to implement automatic input failover only when you attach your most important input.
+ Automatic input failover requires that the downstream system be able to handle two instances of the output and be able to switch from one (when it fails) to the other. MediaPackage, for example, can handle two instances.

  If your downstream system doesn't have this logic built in, then you can't implement automatic input failover.

# Assess the upstream system
<a name="evaluate-upstream-system"></a>

As part of the planning of the MediaLive workflow, you must assess the upstream system that is the source of the content, to ensure that it is compatible with MediaLive. Then you must assess the source content to ensure that it contains formats that MediaLive can ingest and that MediaLive can include in the outputs you want. 

You obtain the *source content* from a *content provider*. The source content is provided to you from an *upstream system* that the content provider controls. Typically, you have already identified the content provider. For more information about source content and upstream systems, see [How MediaLive works](how-medialive-works-channels.md).

**To assess the upstream system**

1. Speak to the content provider to obtain information about the upstream system. You use this information to assess whether MediaLive can connect to the upstream system, and whether MediaLive can use the source content from that upstream system.

   For details about the information to obtain and assess, see the following sections:
   + [Assess source formats and packaging](uss-obtain-info.md)
   + [Assess video content](assess-uss-source.md)
   + [Assess audio content](assess-uss-audio.md)
   + [Assess captions](assess-uss-captions.md)

1. Make a note of the MediaLive input type that you identify for the source content.

1. Make a note of the following three characteristics of the source stream. You will need this information [when you set up the channel](input-specification.md):
   + The video codec
   + The resolution of the video—SD, HD, or UHD
   + The maximum input bitrate 

**Result of this step**

At the end of this step, you will be confident that MediaLive can ingest the content. In addition you will have identified the following:
+ The type of MediaLive input you will create to ingest the source content.
+ The information that you need to extract the video, audio, and captions from the source (from the MediaLive input). For example:

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/evaluate-upstream-system.html)

# Assess source formats and packaging
<a name="uss-obtain-info"></a>

Consult the following table for information about how to assess the source formats and packaging. Read across each row.



| Information to obtain | Verify the following | 
| --- | --- | 
| Number of sources that the content provider can provide. | If you plan to implement a [resiliency feature](plan-redundancy.md), make sure that your content provider can deliver the required inputs:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/uss-obtain-info.html) | 
| Delivery formats and protocols. The type of MediaLive input that applies to the format you identify. | Find out what format and protocol the upstream system supports for delivery. Make sure that this format is listed in the table in [Input types, protocols, and upstream systems](inputs-supported-formats.md). [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/uss-obtain-info.html) Note that you don't need to verify this information for content delivered over CDI or content delivered from an AWS Elemental Link. MediaLive can always handle these input types. | 
| Whether the upstream system is using the latest SDK | Make sure that the content provider is using the latest version of the [AWS CDI SDK](https://aws.amazon.com/media-services/resources/cdi/) on their upstream CDI source device. | 
| Whether the source content is a stream or VOD asset | Find out if the source content is a live stream or a VOD asset. Make sure that MediaLive supports the delivery for the format that you identified. See the table in [Support for live and file sources](inputs-live-vs-file.md).  | 
| Whether the content is encrypted | MediaLive can ingest encrypted content only in HLS sources. If the source content is HLS and it is encrypted, make sure that it is encrypted in a format that MediaLive supports. See [Handling encrypted source content in an HLS source](planning-hls-input-encrypted.md). If MediaLive doesn't support the available encryption format, find out if you can obtain the content in unencrypted form. | 
| If the source content is RTP, whether it includes FEC. |  We recommend that the source content include FEC, because FEC makes visual disruptions in the output less likely.  | 

# Handling encrypted source content in an HLS source
<a name="planning-hls-input-encrypted"></a>

MediaLive can ingest an HLS source that is encrypted according to the HTTP Live Streaming specification.

**Supported encryption format**

MediaLive supports the following format for encrypted HLS sources:
+ The source content is encrypted with AES-128. MediaLive doesn't support AES-SAMPLE. 
+ The source content is encrypted using either static or rotating keys.
+ The manifest includes the `#EXT-X-KEY` tag with these attributes:
  + The `METHOD` attribute specifies AES-128.
  + The URI specifies the license server for the encryption key.
  + The IV is blank or specifies the initialization vector (IV) to use. If the IV is blank, MediaLive uses the value in the `#EXT-X-MEDIA-SEQUENCE` tag as the IV.
+ If both the upstream system and the license server require authentication credentials (user name and password), make sure that the same credentials are used on both servers. MediaLive does not support having different credentials for these two servers.
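For example, a key line in an encrypted source manifest that meets these requirements might look like the following. The URI and IV values are placeholders, not a real license server or key:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:7794
#EXT-X-KEY:METHOD=AES-128,URI="https://license.example.com/keys/key1",IV=0x0123456789ABCDEF0123456789ABCDEF
```

If the `IV` attribute were omitted, MediaLive would use the `#EXT-X-MEDIA-SEQUENCE` value as the IV, as described above.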

**How decryption works**

The content owner sets up the main manifest to include the `#EXT-X-KEY` tag with the method (AES-128), the URL to the license server, and the initialization vector (IV). The content owner places the encryption keys on the license server. When the MediaLive channel that uses this source starts, MediaLive obtains the main manifest and reads the `#EXT-X-KEY` tag for the URL of the license server. 

MediaLive connects to the license server and obtains the encryption key. MediaLive starts pulling the content from the upstream system, and decrypts the content using the encryption key and the IV. 

# Assess video content
<a name="assess-uss-source"></a>

Consult the following table for information about how to assess video source. Read across each row.

**Note**  
You don't need to perform any assessment of the video being delivered over CDI or from an AWS Elemental Link device. These sources are always acceptable to MediaLive.



| Information to obtain | Verify the following | 
| --- | --- | 
| The available video codecs or formats. | Make sure that at least one of the video codecs is included in the list of video codecs for the package format. See [Supported codecs by input type](inputs-supported-codecs-by-input-type.md). If the content is available in more than one supported codec, decide which single video codec you want to use. You can extract only one video asset from the source content. | 
| The maximum expected bitrate. | Make sure that the bandwidth between the upstream system and MediaLive is sufficient to handle the anticipated maximum bitrate of the source content. If you are setting up standard channels (to implement [pipeline redundancy](plan-redundancy.md)), make sure that the bandwidth is double the anticipated maximum bitrate because there are two pipelines. | 
| Whether the video characteristics change in the middle of the stream.  | For best results, verify that the video characteristics of the video source don't change in the middle of the stream. For example, the codec should not change. The frame rate should not change. | 

# Assess audio content
<a name="assess-uss-audio"></a>

Consult the following table for information about how to assess the audio source. Read across each row.

**Note**  
You don't need to perform any assessment of the audio being delivered over CDI or from an AWS Elemental Link device. These sources are always acceptable to MediaLive.



| Information to obtain | Verify the following | 
| --- | --- | 
| The available audio codecs or formats. | Make sure that at least one of the audio codecs is included in the list of audio codecs in [Supported codecs by input type](inputs-supported-codecs-by-input-type.md).  | 
| The available languages for each codec. For example, English, French. | Identify the languages that you would like to offer. Determine which of these languages the content provider can provide.  | 
| The available coding modes (for example, 2.0 and 5.1) for each codec. |  Identify the audio coding modes that you prefer for each audio language. Determine which of these coding modes the content provider can provide. For more information, see the [section after this table](#coding).   | 
| Whether the audio characteristics change in the middle of the stream.  |  For best results, verify that the audio characteristics of the source content don't change in the middle of the stream. For example, the codec of the source should not change. The coding mode should not change. A language should not disappear.  | 
| If the source content is HLS, whether the audio assets are in an audio rendition group or multiplexed with video.  |  MediaLive can ingest audio assets that are in a separate rendition group or multiplexed into a single stream with the video.  | 

**To decide on a coding mode**  
If multiple coding modes are available for the same language, decide which mode you want to use. Follow these guidelines:
+ You can extract some languages in one codec and coding mode, and other languages in another codec and coding mode. For example, you might want one or two languages available in 5.1 coding mode, and want other languages in 2.0 coding mode. 
+ You can extract the same language more than once. For example, you might want one language in both 5.1 coding mode and coding mode 2.0.
+ When deciding which codec and coding mode to extract for a given language, consider the coding mode you want for that language in the output. For each language, it is always easiest if the coding mode of the source content matches the coding mode of the output, because then you don't have to remix the audio in order to convert the coding mode. MediaLive supports remix, but remixing is an advanced feature that requires a good understanding of audio.

For example, in the output, you might want one language to be in coding mode 5.1, and other languages to be available in coding mode 2.0.

Therefore you might choose to extract the following:
+ Spanish in Dolby Digital 5.1
+ French and English in AAC 2.0.

# Assess captions
<a name="assess-uss-captions"></a>

If you plan to include captions in an output group, you must determine if MediaLive can use the captions format in the source to produce the captions format that you want in the output. 

Obtain the following information about the captions source.


[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/assess-uss-captions.html)

**To assess the captions requirements**

Follow these steps for each [output group that you identified](identify-downstream-system.md) for your workflow.

1. Go to [Captions supported in MediaLive](supported-captions.md) and find the section for the output group. For example, find [Captions formats supported in HLS or MediaPackage outputs](supported-formats-hls-output.md). In the table in that section, read down the first column to find the format (container) that the content provider is providing. 

1. Read across to the *Source caption input* column to find the caption formats that MediaLive supports in that source format.

1. Then read across to the *Supported output captions* column to find the caption formats that MediaLive can convert the source format to.

   You end up with a statement such as: "If you want to produce an HLS output and your source content is RTMP, you can convert embedded captions to burn-in, embedded, or WebVTT".

1. Verify that the source content from the content provider matches one of the formats in the *Source caption input* column of the table. For example, verify that the source content contains embedded captions.

1. Find the list of captions formats that the downstream system supports. You obtained this list when you [identified the encode requirements for the output groups that you identified](identify-dss-video-audio.md). Verify that at least one of these output formats appears in the *Supported output captions* column of the table.

   If there is no match in the source content, or no match in the output, then you can't include captions in the output.

For example, assume that you need to produce an HLS output group. Assume that your content provider can give you content in RTP format with embedded captions. Assume that the downstream system requires that for HLS output, the output must include WebVTT captions.

Following the steps above, you read the table for HLS outputs. In the container column of the table, you find the row for RTP format. You read across to the source column and identify that embedded captions are a supported source format. You then read across to the output column and find that embedded captions can be converted to burn-in, embedded, or WebVTT captions. WebVTT captions is the format that the downstream system requires. Therefore, you conclude that you can include captions in the HLS output.
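The lookup described above can be sketched as a small table keyed by output group, source container, and source captions format. The only entry here is the example combination from this section (HLS output, RTP source, embedded captions); the supported-captions tables are the authority for the full matrix:

```python
# Caption conversions, keyed by (output group, source container, source format).
# Only the example combination from this section is filled in; consult the
# MediaLive supported-captions tables for the complete matrix.
CONVERSIONS = {
    ("HLS", "RTP", "embedded"): {"burn-in", "embedded", "WebVTT"},
}

def can_produce(output_group, source_container, source_format, wanted):
    """True if the wanted output captions format can be produced from this source."""
    outputs = CONVERSIONS.get((output_group, source_container, source_format), set())
    return wanted in outputs

print(can_produce("HLS", "RTP", "embedded", "WebVTT"))  # True
```

Running the check for the example above confirms the conclusion in the text: WebVTT captions can be included in the HLS output.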

# Collect information about the source content
<a name="planning-content-extract"></a>

After you have assessed the source content and have identified suitable video, audio, and captions assets in that content, you must obtain information about those assets. The information you need is different for each type of source. 

You don't need this information to [create the input](medialive-inputs.md) in MediaLive. But you will need this information when you [attach the input](creating-a-channel-step2.md) to the channel in MediaLive.

**Result of this step**  
After you have performed the procedures in this step, you should have source content information that looks like this example.


**Example**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/planning-content-extract.html)

**Topics**
+ [Identifying content in a CDI source](extract-contents-cdi.md)
+ [Identifying content in an AWS Elemental Link source](extract-contents-link.md)
+ [Identifying content in an HLS source](extract-contents-hls.md)
+ [Identifying content in a MediaConnect source](extract-content-emx.md)
+ [Identifying content in an MP4 source](extract-contents-mp4.md)
+ [Identifying content in an RTMP source](extract-contents-rtmp.md)
+ [Identifying content in an RTP source](extract-contents-rtp.md)
+ [Identifying content in a SMPTE 2110 source](extract-contents-s2110.md)
+ [Identifying content in an SRT source](extract-contents-srt.md)

# Identifying content in a CDI source
<a name="extract-contents-cdi"></a>

The content in a CDI source always consists of uncompressed video, uncompressed audio, and captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-cdi.html)

# Identifying content in an AWS Elemental Link source
<a name="extract-contents-link"></a>

The content in an AWS Elemental Link source is always a transport stream (TS) that contains one video asset, one audio pair, and optional captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-link.html)

Also obtain the following information about the content:
+ The maximum bitrate. You will have the option to throttle this bitrate when you set up the device in MediaLive. For more information, see [Setting up AWS Elemental Link](setup-devices.md). 
+ Whether the content includes an embedded timecode. If it does, you can choose to use that timecode. For more information, see [Working with timecodes and timestamps](timecode.md). 
+ Whether the content includes ad avail messages (SCTE-104 messages that MediaLive will automatically convert to SCTE-35 messages). For more information about ad avail messages, see [Processing SCTE 35 messages](scte-35-message-processing.md).

# Identifying content in an HLS source
<a name="extract-contents-hls"></a>

The content in an HLS container is always a transport stream (TS) that contains only one video rendition (program). 

Obtain identifying information from the content provider.


****  

|  Asset  |  Details  | Information to obtain | 
| --- | --- | --- | 
| Video | You don't need identifying information. MediaLive always extracts the single video asset. | None | 
| Audio | The source might include multiple audio PIDs. | Obtain the PIDs or the three-character language codes of the languages that you want. We recommend that you obtain the PIDs for the audio assets because they are a more reliable way of identifying an audio asset.  | 
| Captions | Embedded | Obtain the languages and the channel numbers. For example, "channel 1 is French". | 

# Identifying content in a MediaConnect source
<a name="extract-content-emx"></a>

The content in an AWS Elemental MediaConnect source is always a transport stream (TS). The TS is made up of one program (SPTS) or multiple programs (MPTS). Each program contains a combination of video, audio, and optional captions.

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-content-emx.html)

# Identifying content in an MP4 source
<a name="extract-contents-mp4"></a>

The content in an MP4 source always consists of one video track, one or more audio tracks, and optional captions. 

Obtain identifying information from the content provider.


****  

|  Asset  |  Details  | Information to obtain | 
| --- | --- | --- | 
| Video | You don't need identifying information. MediaLive always extracts the single video asset. | None | 
| Audio | The source might include multiple audio tracks, typically, one for each language.  | Obtain the track numbers or three-character language codes of the languages that you want. | 
| Captions | Embedded. The captions might be embedded in the video track or in an ancillary track. | Obtain the languages and the channel numbers. For example, "channel 1 is French".  | 

# Identifying content in an RTMP source
<a name="extract-contents-rtmp"></a>

This procedure applies to both RTMP push and pull inputs from the internet, and to RTMP inputs from Amazon Virtual Private Cloud. The content in an RTMP input always consists of one video, one audio, and optional captions. 

Obtain identifying information from the content provider.


****  

|  Asset  |  Details  | Information to obtain | 
| --- | --- | --- | 
| Video | You don't need identifying information. MediaLive always extracts the single video asset. | None | 
| Audio | You don't need identifying information. MediaLive always extracts the single audio asset. | None | 
| Captions | Embedded. The captions might be embedded in the video track or in an ancillary track. | Obtain the languages and the channel numbers. For example, "channel 1 is French".  | 

# Identifying content in an RTP source
<a name="extract-contents-rtp"></a>

This procedure applies to both RTP inputs from the internet and inputs from Amazon Virtual Private Cloud. The content in an RTP input is always a transport stream (TS). The TS is made up of one program (SPTS) or multiple programs (MPTS). Each program contains a combination of video, audio, and optional captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-rtp.html)

# Identifying content in a SMPTE 2110 source
<a name="extract-contents-s2110"></a>

The content in a SMPTE 2110 source is always a set of streams consisting of one video asset, zero or more audio assets, and zero or more captions (ancillary data) assets. Each asset is in its own stream. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-s2110.html)

# Identifying content in an SRT source
<a name="extract-contents-srt"></a>

The content in an SRT input is always a transport stream (TS). The TS is made up of one program (SPTS) or multiple programs (MPTS). Each program contains a combination of video, audio, and optional captions. 

Obtain identifying information from the content provider.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/extract-contents-srt.html)

# Coordinate with downstream systems
<a name="setting-up-downstream-system"></a>

As the final step in preparing the downstream and upstream systems in your workflow, you must speak to the operator of the downstream system and coordinate information.

The *output* from MediaLive is considered *input* to the downstream system.

The setup is different for each type of output group and downstream system. For more information, see [Setup: Creating output groups and outputs](medialive-outputs.md), and go to the section for the type of output group that you are creating. Read the information about coordinating with the downstream system. 

# Planning the outputs in the channel
<a name="planning-the-channel-in-workflow"></a>

You should plan the AWS Elemental MediaLive channel as the second stage of planning a transcoding *workflow*. You should have already performed the first stage of setting up the upstream and downstream systems, as described in [Preparing the upstream and downstream systems in a workflow](container-planning-uss-dss.md).

The channel lets you configure the characteristics of the outputs and include a wide array of video features. But before you plan these details, you should plan the basic features of the channel.

**Note**  
On the output side, we refer to each video, audio, or captions stream, track, or program as an *encode*.

**Topics**
+ [Identify the output encodes](planning-encodes.md)
+ [Map the output encodes to the sources](channel-map-output-source.md)
+ [Design the encodes](designing-encodes.md)

# Identify the output encodes
<a name="planning-encodes"></a>

When you prepared the downstream systems, you [identified the output groups](identify-downstream-system.md) that you need. Now, as part of the planning of the channel, you must identify the encodes to include in each output group you have decided to create. An *encode* refers to the audio, video, or captions streams in the output.

**Topics**
+ [Identify the video encodes](channel-planning-video-encodes.md)
+ [Identify the audio encodes](channel-planning-audio-encodes.md)
+ [Identify the captions encodes](channel-planning-captions-encodes.md)
+ [Summary of encode rules for output groups](encode-rules.md)
+ [Example of a plan for output encodes](plan-encodes-example.md)

# Identify the video encodes
<a name="channel-planning-video-encodes"></a>

You must decide on the number of video encodes and their codecs. Follow this procedure for each output group. 

1. Determine the maximum number of encodes that are allowed in the output group. The following rules apply for each type of output group.  
****    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-planning-video-encodes.html)

1. If the output group allows more than one video encode, decide how many you want. Keep in mind that you can create multiple output encodes from the single video source that MediaLive ingests.

1. Identify the codec or codecs for the video encodes. 
   + For most types of output groups, the downstream system dictates the codec for each video encode, so you obtained this information when you [identified the encode requirements for the output groups](identify-dss-video-audio.md). 
   + For an Archive output group, you decide which codec suits your purposes.

1. Identify the resolution and bitrate for each video encode. You might have obtained requirements or recommendations from your downstream system when you [identified the encode requirements for the output groups](identify-dss-video-audio.md).

1. Identify the frame rates for each video encode. If you are using more than one video encode, you can ensure compatibility by choosing output frame rates that are multiples of the lowest frame rate used. 

   Examples:
   + 29.97 and 59.94 frames per second are compatible frame rates.
   + 15, 30, and 60 frames per second are compatible frame rates.
   + 29.97 and 30 frames per second are *not* compatible frame rates.
   + 30 and 59.94 frames per second are *not* compatible frame rates. 
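The multiples rule above can be checked exactly by representing each frame rate as a fraction (29.97 fps is really 30000/1001 fps). A minimal sketch:

```python
from fractions import Fraction

# Common frame rates, expressed exactly (29.97 is really 30000/1001).
RATES = {
    "29.97": Fraction(30000, 1001),
    "59.94": Fraction(60000, 1001),
    "15": Fraction(15),
    "30": Fraction(30),
    "60": Fraction(60),
}

def compatible(*names):
    """Frame rates are compatible when every rate is an integer
    multiple of the lowest rate in the set."""
    rates = [RATES[n] for n in names]
    lowest = min(rates)
    return all((r / lowest).denominator == 1 for r in rates)

print(compatible("29.97", "59.94"))  # True
print(compatible("30", "59.94"))     # False
```

Using exact fractions avoids the floating-point rounding that makes 29.97 and 59.94 look like a non-integer ratio.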

    

# Identify the audio encodes
<a name="channel-planning-audio-encodes"></a>

You must decide on the number of audio encodes. Follow this procedure for each output group. 

1. Determine the maximum number of encodes that are allowed in the output group. The following rules apply for each type of output group.  
****    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-planning-audio-encodes.html)

1. If the output group allows more than one audio encode, decide how many you want. These guidelines apply:
   + Each different combination of output codec, coding mode, and language is one encode.

     MediaLive can produce a specific coding mode only if the source contains that coding mode or a higher mode. For example, MediaLive can create 1.0 from a 1.0 or a 2.0 source. It can't create 5.1 from a 2.0 source. 
   + MediaLive can produce a specific language only if the source contains that language. 
   + MediaLive can produce more than one encode for a given language. 

     For example, you could choose to include Spanish in Dolby 5.1 and in AAC 2.0.
   + There is no requirement for the count of encodes to be the same for all languages. For example, you could create two encodes for Spanish, and only one encode for the other languages.

1. Identify the bitrate for each audio encode. You might have obtained requirements or recommendations from your downstream system when you [identified the encode requirements for the output groups](identify-dss-video-audio.md). 
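The coding-mode rule above (MediaLive can downmix but can't invent channels) can be sketched as a channel-count comparison. The mode-to-channel mapping here is illustrative:

```python
# Channel counts for common audio coding modes (illustrative subset).
CHANNELS = {"1.0": 1, "2.0": 2, "5.1": 6}

def can_produce(source_mode, output_mode):
    """MediaLive can produce a coding mode only if the source contains
    that mode or a higher one (it can downmix, but not add channels)."""
    return CHANNELS[output_mode] <= CHANNELS[source_mode]

print(can_produce("2.0", "1.0"))  # True: 1.0 from a 2.0 source
print(can_produce("2.0", "5.1"))  # False: can't create 5.1 from 2.0
```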

# Identify the captions encodes
<a name="channel-planning-captions-encodes"></a>

You must decide on the number of captions encodes. Follow this procedure for each output group. 

1. Determine the maximum number of captions encodes that are allowed in the output group. The following rules apply for each type of output group.  
****    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-planning-captions-encodes.html)

1. Identify the category that each caption format belongs to. See the list in [Captions categories](categories-captions.md). For example, WebVTT captions are sidecar captions.

1. Use this category to identify the number of captions encodes you need in the output group.
   + For embedded captions, you always create one captions encode.
   + For object-style captions and sidecar captions, you create one captions encode for each format and language that you want to include.
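The counting rule above can be sketched as follows. The category and format names are illustrative:

```python
def count_captions_encodes(requirements):
    """Count captions encodes from (category, format, language) tuples.
    Embedded captions need one encode in total; object-style and sidecar
    captions need one encode per format-and-language combination."""
    embedded = any(cat == "embedded" for cat, _fmt, _lang in requirements)
    others = {(fmt, lang) for cat, fmt, lang in requirements
              if cat != "embedded"}
    return (1 if embedded else 0) + len(others)

# Example: embedded captions plus WebVTT (sidecar) in English and French
# needs three captions encodes in the output group.
reqs = [
    ("embedded", "embedded", "any"),
    ("sidecar", "WebVTT", "eng"),
    ("sidecar", "WebVTT", "fre"),
]
print(count_captions_encodes(reqs))  # 3
```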

# Summary of encode rules for output groups
<a name="encode-rules"></a>

 This table summarizes the rules for encodes for each output group. In the first column, find the output group that you want, then read across the row.


****  

| Type of output group | Rule for video encodes | Rule for audio encodes | Rule for captions encodes | 
| --- | --- | --- | --- | 
| Archive | One or more video encodes. | Zero or more audio encodes. | Zero or more captions encodes. The captions are either embedded or object-style captions. | 
| CMAF Ingest | One or more video encodes. Typically, there are multiple video encodes. | Zero or more audio encodes. Typically, there are multiple audio encodes.  | Zero or more captions encodes. Typically, there are caption languages to match the audio languages. The captions are embedded or sidecar captions. | 
| Frame Capture | One video encode. | Zero audio encodes. | Zero captions encodes. | 
| HLS or MediaPackage | One or more video encodes. Typically, there are multiple video encodes. | Zero or more audio encodes. Typically, there are multiple audio encodes.  | Zero or more captions encodes. Typically, there are caption languages to match the audio languages. The captions are either embedded or sidecar captions. | 
| Microsoft Smooth | One or more video encodes. Typically, there are multiple video encodes. | Zero or more audio encodes. Typically, there are multiple audio encodes.  | Zero or more captions encodes. Typically, there are caption languages to match the audio languages. The captions are always sidecar captions. | 
| RTMP |  One video encode.  | Zero or one audio encodes.  | Zero or one caption encodes. The captions are either embedded or object-style captions. | 
| SRT caller |  One or more video encodes.  | One or more audio encodes. | Zero or more captions encodes. The captions are either embedded or object-style captions. | 
| UDP |  One or more video encodes.   | One or more audio encodes.  | Zero or more captions encodes. The captions are either embedded or object-style captions. | 

Some output groups also support audio-only outputs. See [Setting up the output](audio-only-outputs-and-outputgroups.md).

Some output groups also support outputs that contain JPEG files, to support trick play according to the Roku specification. See [Trick-play track via the Image Media Playlist specification](trick-play-roku.md).
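The rules in the table can be expressed as minimum and maximum encode counts per output group, which makes a planned channel easy to sanity-check. The following sketch covers an illustrative subset of the table:

```python
# Encode-count rules per output group, as (min, max); None means no upper
# limit. An illustrative subset of the summary table above.
RULES = {
    "Frame Capture": {"video": (1, 1), "audio": (0, 0), "captions": (0, 0)},
    "RTMP": {"video": (1, 1), "audio": (0, 1), "captions": (0, 1)},
    "UDP": {"video": (1, None), "audio": (1, None), "captions": (0, None)},
}

def check_counts(group, video, audio, captions):
    """Return True if the planned encode counts satisfy the rules
    for the given output group."""
    counts = {"video": video, "audio": audio, "captions": captions}
    for kind, (low, high) in RULES[group].items():
        n = counts[kind]
        if n < low or (high is not None and n > high):
            return False
    return True

print(check_counts("RTMP", 1, 1, 0))           # True
print(check_counts("Frame Capture", 1, 1, 0))  # False: no audio allowed
```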

# Example of a plan for output encodes
<a name="plan-encodes-example"></a>

After you have performed this procedure, you should have information that looks like this example.


**Example**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/plan-encodes-example.html)

# Map the output encodes to the sources
<a name="channel-map-output-source"></a>

In the first step of planning the channel, you identified the number of encodes you need in each output group. You must now determine which assets from the source you can use to produce those encodes.

**Result of this procedure**  
After you have performed this procedure, you will have identified the following key components that you will create in the channel:
+ The video input selectors 
+ The audio input selectors
+ The captions input selectors

Identifying these components is the last step in planning the *input* side of the channel. 

**To map the output to the sources**

1. Obtain the *list of output encodes* you want to produce. You created this list in the [previous step](planning-encodes.md). It is useful to organize this list into a table. For example:  
**Example**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-map-output-source.html)

1. Obtain the *list of sources* that you created when you assessed the source content and collected identifiers. For an example of such a list, see [Assess the upstream system](evaluate-upstream-system.md).

1. In your table of output encodes, add two more columns, labeled *Source* and *Identifier in source*. 

1. For each encode (column 2), find a line in the *list of sources* that can produce that encode. Add the source codec and the identifier of that source codec. This example shows a completed table.  
**Example**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-map-output-source.html)

   You will use this information when you create the channel:
   + You will use the source and source identifier information when you [create the input selectors](input-video-selector.md).
   + You will use the characteristics information when you [create the encodes](creating-a-channel-step6.md) in the output groups.

1. After you have identified the source assets, group those assets that are being used more than once, to remove the duplicates.

1. Label each asset by its type—video, audio, or captions.  
**Example**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/channel-map-output-source.html)
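The grouping and deduplication in the last steps can be sketched as follows. The encode and asset names here are hypothetical, for illustration only; each unique source asset that survives deduplication becomes one input selector in the channel:

```python
# Hypothetical planning table: each output encode mapped to the
# (type, source asset) that produces it.
encode_to_source = {
    "HLS video 1080p": ("video", "HEVC program 1"),
    "RTMP video": ("video", "HEVC program 1"),
    "HLS audio English": ("audio", "PID 101"),
    "HLS audio French": ("audio", "PID 102"),
    "HLS captions English": ("captions", "embedded channel 1"),
}

# Group the assets that are used more than once and remove duplicates:
# each unique (type, asset) pair becomes one input selector.
selectors = {}
for encode, (kind, asset) in encode_to_source.items():
    selectors.setdefault((kind, asset), []).append(encode)

for (kind, asset), encodes in sorted(selectors.items()):
    print(f"{kind} selector for '{asset}' feeds: {', '.join(encodes)}")
```

In this example, the video asset is used by two encodes, so five planned encodes reduce to four input selectors.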

## Example of mapping
<a name="channel-map-example"></a>

The following diagrams illustrate the mapping of the output encodes back to the source assets. The first diagram shows the outputs (at the top) and the sources (at the bottom). The other three diagrams show the same outputs and sources, with the mappings for video, audio, and captions.

**Encodes and assets**

![\[Diagram showing HLS, RTMP, and Archive sections with various video, audio, and caption sources.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out.png)


**Mapping video encodes to assets**

![\[Diagram showing video, audio, and caption sources mapped to HLS, RTMP, and Archive outputs.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out-V.png)


**Mapping audio encodes to assets**

![\[Diagram showing audio and video sources mapped to HLS, RTMP, and Archive outputs.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out-A.png)


**Mapping captions encodes to assets**

![\[Diagram showing video, audio, and caption sources mapped to HLS, RTMP, and Archive outputs.\]](http://docs.aws.amazon.com/medialive/latest/ug/images/channel-design-map-in-out-C.png)


# Design the encodes
<a name="designing-encodes"></a>

In the first step of planning the channel, you [identified](planning-encodes.md) the video, audio, and captions encodes to include in each output group. In the second step, you organized these encodes into outputs in each output group. 

Now in this third step, you must plan the configuration parameters for each encode. As part of this plan, you identify opportunities for sharing encodes among outputs in the same output group in the channel, and among outputs in different output groups in the channel.

**Result of this procedure**  
After you have performed this procedure, you will have a list of video, audio, and captions encodes to create.

**Topics**
+ [Plan the encodes](plan-encodes.md)
+ [Identify encode sharing opportunities](plan-encode-sharing.md)

# Plan the encodes
<a name="plan-encodes"></a>

In [Map the output encodes to the sources](channel-map-output-source.md), you sketched out a plan for the encodes you want to create in each output group. Below is the example of the plan from that step, showing the outputs and encodes, and the sources for those encodes.

At some point, you must fill in the details for the encodes identified in the second and third columns of this table. You have a choice:
+ You can decide these details now. 
+ You can decide the details later, when you are actually creating the channel. If you decide to do this, we recommend you still read the procedures after the table, to get an idea of what is involved in defining an encode.


**Example**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/medialive/latest/ug/plan-encodes.html)

**Design the details for each video encode**

For each video encode in your table, you have already identified the source asset, codec, resolution, and bitrate. You must now identify all the other encoding parameters that you need to set.

Follow this procedure for each individual video encode.

1. Look at the fields in the video encode section of each output. To view these fields, follow these steps. Don't worry about not completing all the sections. You only want to display the video encode fields, and you will then cancel the channel.
   + On the MediaLive home page, choose **Create channel**. 

     If you've created a channel before, you won't see the home page. In that case, in the MediaLive navigation pane, choose **Channels**, and then choose **Create channel**.
   + On the **Create channel** page, under **Output groups**, choose **Add**. 

     Don't worry that you haven't completed any of the earlier sections in the channel. You are only trying to display all the fields for the video encode.
   + In the **Add output group** section, choose **HLS** and choose **Confirm**.
   + Under that output group, choose **Output 1**.
   + In the **Output** section, go to the **Stream settings** section, and choose the **Video** link. 
   + In the **Codec settings** field, choose the codec that you want for this video encode. More fields appear. Choose the field labels for all the sections to display all the fields.

1. In each section, determine whether you need to change the defaults. 
   + Many of the fields have defaults, which means you can leave the field value as is. For details about a field and its default value, choose the **Info** link next to the field.
   + There are some fields that you might need to set according to instructions from your downstream system, to match the expectations of the downstream system.
   + There are some fields where the value you enter affects the output charges for this channel. These are:
     + The **Width** and **Height** fields (which define the video resolution).
     + The **Framerate** fields.
     + The **Rate control** fields.

     For information about charges, see [the MediaLive price list](https://aws.amazon.com/medialive/pricing/).
   + You can read about some of the fields in the following sections:
     + For information about the **Color space** fields, see [Handling complex color space conversions](color-space.md).
     + For information about the **Additional encoding settings** fields, see [Setting up enhanced VQ mode](video-enhancedvq.md).
     + For information about the **Rate control** fields, see [Setting the rate control mode](video-encode-ratecontrol.md). There are fields in this section that affect the output charges for this channel. For more information about charges, see [the MediaLive price list](https://aws.amazon.com/medialive/pricing/).
     + For information about the **Timecode** fields, see [Working with timecodes and timestamps](timecode.md).

1. Make detailed notes about the values for all the fields you plan to change. Do this for every video encode that you identified.

**Design the details for each audio encode**

For each audio encode in your table, you have already identified the source asset, codec and bitrate. You must now identify all the other encoding parameters you need to set.

Follow this procedure for each individual audio encode.

1. Look at the fields in the audio encode section of each output. To view these fields, follow the same steps as for the video encodes, but choose the **Audio 1** link. 

   With audio encodes, there aren't many fields for each codec. But the fields for the codecs are very different from each other.

1. Study the fields and make notes. 

**Design the details for each captions encode**

For each captions encode in your table, you have already identified the source captions, format, and language. You must now identify all the other encoding parameters you need to set.

Follow this procedure for each individual captions encode.

1. Look at the fields in the captions encode section of each output. To view these fields, follow the same steps as for the video encodes, but choose **Add captions** to add a captions section, because there is no captions section by default. 

   With captions encodes, there aren't many fields for each captions format. But the fields for the formats are very different from each other.

1. Study the fields and make notes. 

# Identify encode sharing opportunities
<a name="plan-encode-sharing"></a>

If you have already identified the details for all the output encodes, you can now identify opportunities for encode sharing. 

If you plan to identify details later, we recommend that you come back to this section to identify opportunities. 

Read about encode sharing and encode cloning in [Sharing encodes among outputs](feature-share-encode.md).

You will use encode sharing and encode cloning when you create the encodes in the channel, starting with [Set up the video encode](creating-a-channel-step6.md).
+ When you have a complete list, compare the values for the video encodes:
  + If you have two (or more) encodes with identical values, you can share the encode. When you create the channel, you can create this encode once, in one output. You can then reuse that encode in other outputs. The procedure for creating the encode provides detailed instructions for reusing.

    Keep in mind that two encodes are identical only if they are identical in all their fields, including sharing the same video source. For example, in the sample table earlier in this section, the first video encode for HLS and the video encode for RTMP share the same video source.
  + If you have two (or more) encodes with nearly identical values, you can clone an encode to create a second encode, and then change specific fields in the second encode. The procedure for creating the encode provides detailed instructions for cloning.

  Carefully identify the video encodes to share by noting the outputs and output groups that each belongs to.

+ Then identify opportunities for sharing the audio encodes, in the same way as you did for the video encodes. Keep in mind that two encodes are identical only if they are identical in all their fields, including sharing the same audio source. 

+ Finally, identify opportunities for sharing the captions encodes, in the same way. Keep in mind that two encodes are identical only if they are identical in all their fields, including sharing the same captions source. 
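The share-or-clone decision can be sketched by comparing planned encodes field by field. The two-field threshold for cloning here is an arbitrary illustration, not a MediaLive rule:

```python
def sharing_opportunity(encode_a, encode_b):
    """Compare two planned encodes (dicts of all their fields, including
    the source selector). Identical encodes can be shared; encodes that
    differ in only a few fields are candidates for cloning. The two-field
    threshold is an arbitrary choice for this sketch."""
    if encode_a == encode_b:
        return "share"
    diffs = {k for k in encode_a if encode_a.get(k) != encode_b.get(k)}
    diffs |= {k for k in encode_b if k not in encode_a}
    return "clone" if len(diffs) <= 2 else "create from scratch"

video_a = {"codec": "AVC", "resolution": "1920x1080",
           "bitrate": 5_000_000, "source": "video selector 1"}
video_d = dict(video_a)  # identical in every field, including the source
video_b = dict(video_a, resolution="1280x720", bitrate=3_000_000)

print(sharing_opportunity(video_a, video_d))  # share
print(sharing_opportunity(video_a, video_b))  # clone
```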

**Example**

Following from the example in the earlier steps in this section about channel planning, you might decide you have these opportunities shown in the last two columns of this table.


****  

| Encode nickname |  Characteristics of the encode  | Source | Opportunity | Action | 
| --- | --- | --- | --- | --- | 
| VideoA |  AVC 1920x1080, 5 Mbps  | HEVC  |  | Create this encode from scratch. | 
| VideoB |  AVC 1280x720, 3 Mbps  | HEVC  | Clone | Clone VideoA and change the resolution and bitrate, and perhaps other fields. | 
| VideoC | AVC 320x240, 750 Kbps | HEVC  | Clone | Clone VideoA and change the resolution and bitrate, and perhaps other fields. | 
| AudioA | AAC 2.0 in English at 192000 bps | AAC 2.0 |  | Create this encode from scratch. | 
| AudioB | AAC 2.0 in French at 192000 bps | AAC 2.0  | Clone | Clone AudioA and change the audio selector (the reference to the source) to the selector for French. Perhaps also change other fields. | 
| CaptionsA |  WebVTT (object-style) converted from embedded, in English  | Embedded |  | Create this encode from scratch. | 
| CaptionsB | WebVTT (object-style) converted from embedded, in French | Embedded | Clone | Clone CaptionsA and change the captions selector (the reference to the source) to the selector for French. Perhaps also change other fields. | 
| VideoD | AVC 1920x1080, 5Mbps  | HEVC  | Share | Share VideoA | 
| AudioC | Dolby Digital 5.1 in Spanish | Dolby Digital 5.1  |  | Create this encode from scratch. | 
| CaptionsC | RTMP CaptionInfo (converted from embedded) in Spanish | Embedded | Clone | Clone CaptionsA and change the captions selector (the reference to the source) to the selector for Spanish. Perhaps also change other fields. | 
| VideoE | AVC, 1920x1080, 5 Mbps | HEVC  | Share | Share VideoA | 
| AudioD | Dolby Digital 2.0 in Spanish  | AAC 2.0 |  | Create this encode from scratch. Although its source is the same as AudioA's, its output codec is different, which means all its configuration fields are different. Therefore, there is no advantage to cloning. | 
| AudioE | Dolby Digital 2.0 in French | AAC 2.0  | Clone | Clone AudioD and change the audio selector (the reference to the source) to the selector for French. Perhaps also change other fields. Don't clone AudioB, because AudioB and AudioE have different output codecs. Therefore, there is no advantage to cloning. | 
| AudioF | Dolby Digital 2.0 in English | AAC 2.0 | Clone | Clone AudioD and change the audio selector (the reference to the source) to the selector for English. Perhaps also change other fields. Don't clone AudioB, because AudioB and AudioF have different output codecs. Therefore, there is no advantage to cloning. | 
| CaptionsD | DVB-Sub (object-style) converted from Teletext, in 6 languages.  | Teletext |  | Create this encode from scratch. | 