

# Creating an Elemental Inference workflow

You must create a feed and enable at least one AI feature in that feed. After you have created the feed, you must associate one resource, which represents the source media that Elemental Inference will work on.

When you are ready, you must deliver the source media to Elemental Inference. Elemental Inference will produce metadata for each of the features that you set up. You must obtain that metadata and use it to produce the desired media, for example, to produce a video file of an event.

**Topics**
+ [Step A: Create the feed in Elemental Inference](create-feed.md)
+ [Step B: Format the source media](source-format.md)
+ [Step C: Deliver the source media](deliver-source.md)
+ [Step D: Query the metadata](query-metadata-query.md)

# Step A: Create the feed in Elemental Inference

You must create a feed that contains the features (outputs) that you want to use. After you've created the feed, you must associate a source media (resource) with it. 

You can create an Elemental Inference feed using the Elemental Inference console or the AWS CLI. 

**Topics**
+ [Creating using the console](create-feed-console.md)
+ [Creating using the CLI](create-feed-cli.md)
+ [Configuring each feature](create-feed-outputs.md)

# Creating using the console


This section describes how to use the Elemental Inference console to create an Elemental Inference feed.

**Create the feed**

1. Open the Elemental Inference console at [https://console.aws.amazon.com/elemental-inference/](https://console.aws.amazon.com/elemental-inference/).

1. In the left navigation bar, choose **Feeds**. On the **Feeds** page, choose **Create**. 

1. Complete the fields:
   + Enter a friendly name for the feed. You might want to specify a name that helps you to identify the source media that you plan to use with this feed. For example, **feed-soccer**.
   + In the **AI features** section, enable the features that you want to use. Each feature becomes an output in the feed. See the sections after this procedure for information about configuring specific features.
   + Optionally, associate tags with the feed.

1. Choose **Create feed**. The **Feeds** page appears showing a list with one line for each feed. After a few moments, the status of the feed you just created will be **Available**.

   **Available** means that the feed isn't currently associated with a source media.

**Associate the resource**

1. In **Feed association**, choose **Add association**. Enter a friendly name for the source media (resource) that you intend for this feed. You might want to specify a name that helps you to identify the feed that this source media belongs to. For example, **source-soccer**.

1. In the **Feed association** section, choose **Save** to confirm the association. The **Feed** information on the page is updated: 
   + In **Feed association**, the **Integration** field appears, showing the data endpoint for the feed.
   + In **General details**, the status of the feed changes to **Active**, which means that a resource is associated with the feed.
   + In **Outputs**, the status of each output changes to **Enabled**.

     If you want to disable an output or change any other information for the output, select the **Edit** button (a pencil) on the right. 

   For information about feed and output status, see [Lifecycle of an AWS Elemental Inference workflow](monitor-inference-feed-lifecycle.md).

1. Make a note of the data endpoint (in the **Integration** field). You will need this value in order to deliver the source media to Elemental Inference.

# Creating using the CLI


This section describes how to use the AWS CLI to create an Elemental Inference feed.

You must set up a fully configured feed: resource - feed - output or outputs, where the source media is the resource and each output represents one Elemental Inference feature.

1. Use `create-feed` to create a new feed. 

   Include one output for each feature you want to implement. Set the status to `ENABLED` in each output.

1. The response includes the following information that you should make a note of:
   + `id`: The feed ID, which you will need for CLI commands on this feed.
   + `arn`: The feed ARN. You can also obtain the ARN using `get-feed`.
   + `dataEndpoints`: The ARN of the data endpoint for this feed. You will use this ARN when you send source media to Elemental Inference for processing, and when you retrieve the metadata that is the result of this processing.

1. After the feed is created, the status of the feed will eventually change to `AVAILABLE`, indicating that it is ready to have a resource (source media) associated with it.

1. Use `associate-feed` to associate the source media with the feed. The source media is the resource for the feed. 

   You now have a usable feed: resource - feed - output.

# Configuring each feature


Following are details about how to configure each feature (output) that you include in an Elemental Inference feed.

## Configuring event clipping


In **Callback config**, you can enter a string that you want Elemental Inference to always include in the event clipping metadata for this output. This information is useful when you later work with Elemental Inference events in Amazon EventBridge. You will be able to filter events using this information, in order to find the events for one feed. The string might identify the sports event in the feed, for example.

## Configuring smart crop


There is no specific configuration for smart crop.

# Step B: Format the source media

Make sure that the source media meets the requirements.

## Format requirements


You must format the source media according to the CMAF Ingest (Interface-1) version 1.2 specification. 

The following table identifies specific requirements for Elemental Inference.


| Characteristic | Requirement | 
| --- | --- | 
|  Media fragments  |  Fragmented CMAF Ingest containerized media fragments  | 
|  MovieFragmentBox  |  One per segment  | 
|  Initialization segment: naming  |  Include an initialization segment with each stream, as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-inference/latest/userguide/source-format.html)  | 
|  Media segments: naming  |  Media segments after the initialization segment must follow this naming pattern: `Streams(default-<type>.<ext>)/Segment(<sequence-number>)` Where: `<type>` is video or audio. `<ext>` is cmfv or cmfa. `<sequence-number>` must increase monotonically, although it doesn't have to be contiguous. Each sequence number must match the sequence number in the MovieFragmentHeader box. For example: `Streams(default-video.cmfv)/Segment(<sequence-number>)`  | 
|  End of Stream indicator  |  The last media segment in the session must be: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-inference/latest/userguide/source-format.html) If you can't signal the end of stream in this way, there is a workaround. See [Step C: Deliver the source media](deliver-source.md).  | 

## Example


The following code shows how to use FFmpeg to format the media to meet these requirements. The commands demux, segment, and containerize the video and audio. Note the `-init_seg_name` and `-media_seg_name` lines.

```
$ mkdir 'Streams(default-video.cmfv)'
$ ffmpeg -i input.mp4 \
    -map 0:v:0 -c:v libx264 \
    -profile:v main -pix_fmt yuv420p \
    -g 30 -keyint_min 30 -sc_threshold 0 \
    -force_key_frames 'expr:gte(t,n_forced*1)' \
    -f dash -seg_duration 1 -use_timeline 0 \
    -use_template 1 -remove_at_exit 0  \
    -init_seg_name 'Streams(default-video.cmfv)/InitializationSegment' \
    -media_seg_name 'Streams(default-video.cmfv)/Segment($Number%09d$)' \
    'video.mpd'

$ mkdir 'Streams(default-audio.cmfa)'
$ ffmpeg -i input.mp4 \
    -map 0:a:0 -c:a aac -ar 48000 -ac 2 \
    -f dash -seg_duration 1 -use_timeline 0 \
    -use_template 1 -remove_at_exit 0 \
    -init_seg_name 'Streams(default-audio.cmfa)/InitializationSegment' \
    -media_seg_name 'Streams(default-audio.cmfa)/Segment($Number%09d$)' \
    'audio.mpd'
```

# Step C: Deliver the source media

To deliver the source media, you must use an AWS API or SDK. You can't deliver the media using the Elemental Inference console.

1. Obtain the data endpoint for the feed:
   + Using an AWS API or SDK: Use the `GetFeed` operation of Elemental Inference. For information about the operation and parameters, see [https://docs.aws.amazon.com/elemental-inference/latest/APIReference/API_GetFeed](https://docs.aws.amazon.com/elemental-inference/latest/APIReference/API_GetFeed). 
   + Using the Elemental Inference console: Choose **Feeds** in the left navigation bar, then select the feed. On the details page, the data endpoint is in the **Integration** field. 

1. Use the Elemental Inference PutMedia operation to deliver the source media to that data endpoint. Make sure to stay within the **Request rate for PutMedia** and **Request rate for PutMedia (in a burst)** quotas. To view the current quotas, open the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home). In the navigation pane, choose **AWS services** and select **AWS Elemental Inference**.

## Streaming requirements


You must deliver the source media according to the CMAF Ingest (Interface-1) version 1.2 specification. 

The following table identifies specific requirements for Elemental Inference.


| Characteristic | Requirement | 
| --- | --- | 
| Media segment duration | 0-2 seconds | 
|  Ingestion order  |  Elemental Inference will ingest all media segments (audio and video) for a given sequence number before proceeding to the next sequence number  | 
|  End of Stream indicator (workaround)  |  Typically, the last media segment includes `lmsg`. However, if you can't signal the end of stream in this way (for example, you are using FFmpeg), then flush the Elemental Inference internal buffer as follows: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/elemental-inference/latest/userguide/deliver-source.html)  | 
| Manifest | CMAF Ingest doesn't support manifests. | 
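Because sequence numbers must increase monotonically (but don't have to be contiguous), it can be worth validating them before delivery. This is a minimal sketch, not part of any Elemental Inference tooling:

```python
# Check that segment sequence numbers increase strictly, as the format
# requirements demand. Gaps are allowed; repeats and decreases are not.

def is_monotonic(sequence_numbers):
    """Return True if every number is strictly greater than the one before it."""
    return all(a < b for a, b in zip(sequence_numbers, sequence_numbers[1:]))

print(is_monotonic([1, 2, 5, 9]))   # True: gaps are allowed
print(is_monotonic([1, 3, 3, 4]))   # False: 3 repeats
```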

## Example


The following code sample shows how to use awscurl to send the content to the data endpoint of a feed with a PUT request. (The example assumes that you have exported your AWS credentials as environment variables, which is standard practice when using the AWS REST API.)

```
# Initialization
$ awscurl --region <region> --service elemental-inference -X PUT \
  'https://<data-endpoint>/v1/feed/<feed-id>/input/0/media/Streams(default-audio.cmfa)/InitializationSegment' \
  -d '@Streams(default-audio.cmfa)/InitializationSegment'

$ awscurl --region <region> --service elemental-inference -X PUT \
  'https://<data-endpoint>/v1/feed/<feed-id>/input/0/media/Streams(default-video.cmfv)/InitializationSegment' \
  -d '@Streams(default-video.cmfv)/InitializationSegment'

# Media
$ awscurl --region <region> --service elemental-inference -X PUT \
  'https://<data-endpoint>/v1/feed/<feed-id>/input/0/media/Streams(default-audio.cmfa)/Segment(<sequence>)' \
  -d '@Streams(default-audio.cmfa)/Segment(<sequence>)'

$ awscurl --region <region> --service elemental-inference -X PUT \
  'https://<data-endpoint>/v1/feed/<feed-id>/input/0/media/Streams(default-video.cmfv)/Segment(<sequence>)' \
  -d '@Streams(default-video.cmfv)/Segment(<sequence>)'
```
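Elemental Inference ingests all media segments for a given sequence number before moving to the next one, so it helps to build the full, ordered list of PUT targets before sending anything. This Python sketch only constructs the URLs; `example-endpoint` and `example-feed` are placeholders, and the actual uploads would still be performed with awscurl or another SigV4-signed HTTP client:

```python
# Build the ordered list of PUT URLs for a feed: initialization segments
# first, then audio and video media segments grouped by sequence number.

def put_targets(data_endpoint, feed_id, sequences):
    """Return PUT URLs in the order the segments should be delivered."""
    base = f"https://{data_endpoint}/v1/feed/{feed_id}/input/0/media"
    streams = [("audio", "cmfa"), ("video", "cmfv")]
    # Initialization segments go first, one per stream.
    urls = [f"{base}/Streams(default-{t}.{e})/InitializationSegment"
            for t, e in streams]
    # Then all media segments for each sequence number, in increasing order.
    for seq in sorted(sequences):
        for t, e in streams:
            urls.append(f"{base}/Streams(default-{t}.{e})/Segment({seq})")
    return urls

targets = put_targets("example-endpoint", "example-feed", [2, 1])
```

Sorting the sequence numbers up front guarantees the monotonic delivery order even if the segment files are listed out of order on disk.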

# Step D: Query the metadata

Use the Elemental Inference GetEndpoint operation to obtain the metadata that Elemental Inference generates.

For example, the following awscurl command shows how to use a POST request to query the metadata for the output named `testOutput`. The query is for the first second of metadata. This one-second span is identified by the start PTS of 0 and the end PTS of 1001.

```
# Query the first second of metadata
$ awscurl --service "elemental-inference" --region <region> \
  -X POST 'https://<data-endpoint>/v1/feed/<feed-id>/input/0/metadata' \
  -H "Content-Type: application/json" \
  -d '{"outputName": "testOutput", "timeSpecification": { "ptsBased": { "startPts": 0, "endPts": 1001, "timescale": 1000 } }, "parameters": {"smartCropping": {"frameRate": { "numerator": 24, "denominator": 1}}}}'
```
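The JSON request body shown above can also be assembled programmatically. The field names below come straight from the example, and the output name and PTS values are just the sample values:

```python
import json

# Build the metadata query body from the example above. The values here
# are the sample values: the first second of metadata at timescale 1000.
payload = {
    "outputName": "testOutput",
    "timeSpecification": {
        # PTS 0 through 1001 at a timescale of 1000 covers the first second.
        "ptsBased": {"startPts": 0, "endPts": 1001, "timescale": 1000}
    },
    "parameters": {
        "smartCropping": {"frameRate": {"numerator": 24, "denominator": 1}}
    },
}
body = json.dumps(payload)  # pass this as the POST request body
```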

For information about the metadata returned for each feature, see the following topics.

**Topics**
+ [Metadata for smart crop](#query-metadata-smart-crop)

## Metadata for smart crop


The following awscurl command shows the query plus the results when the output `testOutput` is a smart crop output.



```
# Query the first second of metadata
$ awscurl --service "elemental-inference" --region <region> \
  -X POST 'https://<data-endpoint>/v1/feed/<feed-id>/input/0/metadata' \
  -H "Content-Type: application/json" \
  -d '{"outputName": "testOutput", "timeSpecification": { "ptsBased": { "startPts":0, "endPts": 1001, "timescale": 1000 } }, "parameters": {"smartCropping": {"frameRate": { "numerator": 24, "denominator": 1}}}}'
{
    "items": [
        {
            "metadata": {
                "smartCropping": {
                    "crop": {
                        "centerPoint": {
                            "scale": 10000,
                            "xPosition": 2176,
                            "yPosition": 6250
                        }
                    }
                }
            },
            "pts": 0,
            "timecode": null
        },
        {
            "metadata": {
                "smartCropping": {
                    "crop": {
                        "centerPoint": {
                            "scale": 10000,
                            "xPosition": 2176,
                            "yPosition": 6250
                        }
                    }
                }
            },
            "pts": 41,
            "timecode": null
        },
        {
            "metadata": {
                "smartCropping": {
                    "crop": {
                        "centerPoint": {
                            "scale": 10000,
                            "xPosition": 2208,
                            "yPosition": 6238
                        }
                    }
                }
            },
            "pts": 83,
            "timecode": null
        },
.
.
.
        {
            "metadata": {
                "smartCropping": {
                    "crop": {
                        "centerPoint": {
                            "scale": 10000,
                            "xPosition": 2873,
                            "yPosition": 5781
                        }
                    }
                }
            },
            "pts": 1000,
            "timecode": null
        }
    ]
}
```
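A response shaped like the one above can be reduced to a list of (pts, x, y) tuples for use in a cropping pipeline. The `response` dictionary below is a trimmed, hard-coded stand-in for a real response:

```python
# Extract (pts, xPosition, yPosition) from a smart crop metadata response.
# This sample dictionary mirrors the first and last items shown above.
response = {
    "items": [
        {"metadata": {"smartCropping": {"crop": {"centerPoint": {
            "scale": 10000, "xPosition": 2176, "yPosition": 6250}}}},
         "pts": 0, "timecode": None},
        {"metadata": {"smartCropping": {"crop": {"centerPoint": {
            "scale": 10000, "xPosition": 2873, "yPosition": 5781}}}},
         "pts": 1000, "timecode": None},
    ]
}

centers = [
    (item["pts"],
     item["metadata"]["smartCropping"]["crop"]["centerPoint"]["xPosition"],
     item["metadata"]["smartCropping"]["crop"]["centerPoint"]["yPosition"])
    for item in response["items"]
]
print(centers)  # [(0, 2176, 6250), (1000, 2873, 5781)]
```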

### Using the metadata


For smart crop, Elemental Inference identifies a *region of interest* in each frame. Elemental Inference then generates metadata that identifies the centerpoint in that region. You can develop a solution that uses this metadata to crop and scale the video. The centerpoint provides you with a reference point for the cropping and scaling algorithms that you develop. 

The centerpoint is identified using three pieces of data:
+ *scale* is a reference value for calculating the positions as percentages.
+ *X position* is the position of the centerpoint on the X axis, measured from the top-left corner of the video frame. It is always a positive number.
+ *Y position* is the position of the centerpoint on the Y axis, measured from the top-left corner of the video frame. It is always a positive number.

You can use this data to calculate the centerpoint pixel position in an output video of any resolution. The formulas for finding the centerpoint are:

(*X position*) x width of output video / *scale*

(*Y position*) x height of output video / *scale*

**Example 1**

For example, if the output video is 1920 x 1080, then the following applies to the first piece of data in the metadata example:
+ The X pixel position is 2176 x 1920 / 10000 = pixel 417.792, or 418 rounded up
+ The Y pixel position is 6250 x 1080 / 10000 = pixel 675

**Example 2**

Or if the output video is 1280 x 720, then the following applies:
+ The X pixel position is 2176 x 1280 / 10000 = pixel 278.528, or 279 rounded up
+ The Y pixel position is 6250 x 720 / 10000 = pixel 450
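Both worked examples can be checked with a few lines of Python; `math.ceil` implements the "rounded up" step. The function name is illustrative, not part of any Elemental Inference SDK:

```python
import math

def center_pixels(x_pos, y_pos, scale, width, height):
    """Map a scaled centerpoint onto an output resolution, rounding up."""
    return (math.ceil(x_pos * width / scale),
            math.ceil(y_pos * height / scale))

print(center_pixels(2176, 6250, 10000, 1920, 1080))  # (418, 675)
print(center_pixels(2176, 6250, 10000, 1280, 720))   # (279, 450)
```

Because *scale* is carried in every metadata item, the same calculation works unchanged for any output resolution you choose.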