

# Searching faces in a collection in streaming video


**Note**  
Amazon Rekognition Streaming Video Analysis will no longer be open to new customers starting April 30, 2026. If you would like to use Streaming Video Analysis, sign up prior to that date. Existing customers for accounts that have used this feature within the last 12 months can continue to use the service as normal. For more information, see [Rekognition Streaming Video Analysis availability change](https://docs.aws.amazon.com/rekognition/latest/dg/rekognition-streaming-video-analysis-availability-change.html). 

You can use Amazon Rekognition Video to detect and recognize faces from a collection in streaming video. With Amazon Rekognition Video you can create a stream processor ([CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html)) to start and manage the analysis of streaming video. 

To detect a known face in a video stream (face search), Amazon Rekognition Video uses Amazon Kinesis Video Streams to receive and process a video stream. The analysis results are output from Amazon Rekognition Video to a Kinesis data stream and then read by your client application. 

To use Amazon Rekognition Video with streaming video, your application needs to implement the following:
+ A Kinesis video stream for sending streaming video to Amazon Rekognition Video. For more information, see the [Amazon Kinesis Video Streams Developer Guide](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/what-is-kinesis-video.html). 
+ An Amazon Rekognition Video stream processor to manage the analysis of the streaming video. For more information, see [Overview of Amazon Rekognition Video stream processor operations](streaming-video.md#using-rekognition-video-stream-processor).
+ A Kinesis data stream consumer to read the analysis results that Amazon Rekognition Video sends to the Kinesis data stream. For more information, see [Kinesis Data Streams Consumers](https://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-consumers.html). 

This section contains information about writing an application that creates the Kinesis video stream and other necessary resources, streams video into Amazon Rekognition Video, and receives the analysis results.

**Topics**
+ [Setting up your Amazon Rekognition Video and Amazon Kinesis resources](setting-up-your-amazon-rekognition-streaming-video-resources.md)
+ [Searching faces in a streaming video](rekognition-video-stream-processor-search-faces.md)
+ [Streaming using a GStreamer plugin](streaming-using-gstreamer-plugin.md)
+ [Troubleshooting streaming video](streaming-video-troubleshooting.md)

# Setting up your Amazon Rekognition Video and Amazon Kinesis resources



The following procedure describes how to provision the Kinesis video stream and other resources that are used to recognize faces in streaming video.

## Prerequisites


To run this procedure, you need to have the AWS SDK for Java installed. For more information, see [Getting started with Amazon Rekognition](getting-started.md). The AWS account you use must have access permissions to the Amazon Rekognition API. For more information, see [Actions Defined by Amazon Rekognition](https://docs.aws.amazon.com/IAM/latest/UserGuide/list_amazonrekognition.html#amazonrekognition-actions-as-permissions) in the *IAM User Guide*. 

**To recognize faces in a video stream (AWS SDK)**

1. If you haven't already, create an IAM service role to give Amazon Rekognition Video access to your Kinesis video streams and your Kinesis data streams. Note the ARN. For more information, see [Giving access to streams using AmazonRekognitionServiceRole](api-streaming-video-roles.md#api-streaming-video-roles-all-stream).

1. [Create a collection](create-collection-procedure.md) and note the collection identifier you used.

1. [Index the faces](add-faces-to-collection-procedure.md) you want to search for into the collection you created in step 2.

1. [Create a Kinesis video stream](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/gs-createstream.html) and note the stream's Amazon Resource Name (ARN).

1. [Create a Kinesis data stream](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html). Prepend the stream name with *AmazonRekognition* and note the stream's ARN.

You can then [create the face search stream processor](rekognition-video-stream-processor-search-faces.md#streaming-video-creating-stream-processor) and [start the stream processor](rekognition-video-stream-processor-search-faces.md#streaming-video-starting-stream-processor) using the stream processor name that you chose.

**Note**  
 You should start the stream processor only after you have verified you can ingest media into the Kinesis video stream. 

## Streaming video into Amazon Rekognition Video


To stream video into Amazon Rekognition Video, you use the Amazon Kinesis Video Streams SDK to create and use a Kinesis video stream. The `PutMedia` operation writes video data *fragments* into a Kinesis video stream that Amazon Rekognition Video consumes. Each video data fragment is typically 2–10 seconds in length and contains a self-contained sequence of video frames. Amazon Rekognition Video supports H.264 encoded videos, which can have three types of frames (I, B, and P). For more information, see [Inter Frame](https://en.wikipedia.org/wiki/Inter_frame). The first frame in the fragment must be an I-frame, because an I-frame can be decoded independently of any other frame. 

As video data arrives into the Kinesis video stream, Kinesis Video Streams assigns a unique number to the fragment. For an example, see [PutMedia API Example](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/examples-putmedia.html).
+  If you are streaming from a Matroska (MKV) encoded source, use the [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html) operation to stream the source video into the Kinesis video stream that you created. For more information, see [PutMedia API Example](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/examples-putmedia.html). 
+  If you are streaming from a device camera, see [Streaming using a GStreamer plugin](streaming-using-gstreamer-plugin.md).

# Giving Amazon Rekognition Video access to your resources


You use an AWS Identity and Access Management (IAM) service role to give Amazon Rekognition Video read access to Kinesis video streams. If you are using a face search stream processor, you use an IAM service role to give Amazon Rekognition Video write access to Kinesis data streams. If you are using a security monitoring stream processor, you use IAM roles to give Amazon Rekognition Video access to your Amazon S3 bucket and to an Amazon SNS topic.

## Giving access for face search stream processors


You can create a permissions policy that allows Amazon Rekognition Video access to individual Kinesis video streams and Kinesis data streams.

**To give Amazon Rekognition Video access for a face search stream processor**

1. [Create a new permissions policy with the IAM JSON policy editor](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-json-editor), and use the following policy. Replace `video-arn` with the ARN of the desired Kinesis video stream. If you are using a face search stream processor, replace `data-arn` with the ARN of the desired Kinesis data stream.

1. [Create an IAM service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html?icmpid=docs_iam_console), or update an existing IAM service role. Use the following information to create the IAM service role:

   1. Choose **Rekognition** for the service name.

   1. Choose **Rekognition** for the service role use case.

   1. Attach the permissions policy that you created in step 1.

1. Note the ARN of the service role. You need it to start video analysis operations.
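
The permissions policy referenced in step 1 can be sketched as follows. This is a minimal example, assuming `video-arn` and `data-arn` stand in for the stream ARNs you noted earlier; verify the action list against the current IAM documentation for Amazon Rekognition before using it.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:PutRecord",
                "kinesis:PutRecords"
            ],
            "Resource": "data-arn"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kinesisvideo:GetDataEndpoint",
                "kinesisvideo:GetMedia"
            ],
            "Resource": "video-arn"
        }
    ]
}
```

Scoping the policy to individual stream ARNs, rather than using a wildcard, limits what Amazon Rekognition Video can read and write on your behalf.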

## Giving access to streams using AmazonRekognitionServiceRole


As an alternative way to set up access to Kinesis video streams and data streams, you can use the `AmazonRekognitionServiceRole` permissions policy. When you create a role with the *Rekognition* service role use case and attach this policy, the role can write to multiple Kinesis data streams and read from all of your Kinesis video streams. To give Amazon Rekognition Video write access to multiple Kinesis data streams, prepend the names of the data streams with *AmazonRekognition*, for example, `AmazonRekognitionMyDataStreamName`. 

**To give Amazon Rekognition Video access to your Kinesis video stream and Kinesis data stream**

1. [Create an IAM service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html?icmpid=docs_iam_console). Use the following information to create the IAM service role:

   1. Choose **Rekognition** for the service name.

   1. Choose **Rekognition** for the service role use case.

   1. Choose the **AmazonRekognitionServiceRole** permissions policy, which gives Amazon Rekognition Video write access to Kinesis data streams that are prefixed with *AmazonRekognition* and read access to all your Kinesis video streams.

1. To help keep your AWS account secure, limit the scope of Amazon Rekognition's access to just the resources you are using by attaching a trust policy to your IAM service role. For information on how to do this, see [Cross-service confused deputy prevention](cross-service-confused-deputy-prevention.md).

1. Note the Amazon Resource Name (ARN) of the service role. You need it to start video analysis operations.

# Searching faces in a streaming video



Amazon Rekognition Video can search faces in a collection that match faces that are detected in a streaming video. For more information about collections, see [Searching faces in a collection](collections.md).

**Topics**
+ [Creating the Amazon Rekognition Video face search stream processor](#streaming-video-creating-stream-processor)
+ [Starting the Amazon Rekognition Video face search stream processor](#streaming-video-starting-stream-processor)
+ [Using stream processors for face searching (Java V2 example)](#using-stream-processors-v2)
+ [Using stream processors for face searching (Java V1 example)](#using-stream-processors)
+ [Reading streaming video analysis results](streaming-video-kinesis-output.md)
+ [Displaying Rekognition results with Kinesis Video Streams locally](displaying-rekognition-results-locally.md)
+ [Understanding the Kinesis face recognition JSON frame record](streaming-video-kinesis-output-reference.md)

The following diagram shows how Amazon Rekognition Video detects and recognizes faces in a streaming video.

![\[Diagram of workflow for using Amazon Rekognition Video to process video streams from Amazon Kinesis.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/VideoRekognitionStream.png)


## Creating the Amazon Rekognition Video face search stream processor


Before you can analyze a streaming video, you create an Amazon Rekognition Video stream processor ([CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html)). The stream processor contains information about the Kinesis data stream and the Kinesis video stream. It also contains the identifier for the collection that contains the faces you want to recognize in the input streaming video. You also specify a name for the stream processor. The following is a JSON example for the `CreateStreamProcessor` request.

```
{
       "Name": "streamProcessorForCam",
       "Input": {
              "KinesisVideoStream": {
                     "Arn": "arn:aws:kinesisvideo:us-east-1:nnnnnnnnnnnn:stream/inputVideo"
              }
       },
       "Output": {
              "KinesisDataStream": {
                     "Arn": "arn:aws:kinesis:us-east-1:nnnnnnnnnnnn:stream/outputData"
              }
       },
       "RoleArn": "arn:aws:iam::nnnnnnnnnnn:role/roleWithKinesisPermission",
       "Settings": {
              "FaceSearch": {
                     "CollectionId": "collection-with-100-faces",
                     "FaceMatchThreshold": 85.5
              }
       }
}
```

The following is an example response from `CreateStreamProcessor`.

```
{
       "StreamProcessorArn": "arn:aws:rekognition:us-east-1:nnnnnnnnnnnn:streamprocessor/streamProcessorForCam"
}
```

## Starting the Amazon Rekognition Video face search stream processor


You start analyzing streaming video by calling [StartStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartStreamProcessor.html) with the stream processor name that you specified in `CreateStreamProcessor`. The following is a JSON example for the `StartStreamProcessor` request.

```
{
       "Name": "streamProcessorForCam"
}
```

If the stream processor successfully starts, an HTTP 200 response is returned, along with an empty JSON body.
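
You can confirm that the processor has reached the `RUNNING` state by calling [DescribeStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DescribeStreamProcessor.html) with the same name. The following is a sketch of the request and an abbreviated response; the values shown are placeholders, and the real response contains additional fields such as the input, output, and role ARNs.

```
{
       "Name": "streamProcessorForCam"
}
```

```
{
       "Name": "streamProcessorForCam",
       "StreamProcessorArn": "arn:aws:rekognition:us-east-1:nnnnnnnnnnnn:streamprocessor/streamProcessorForCam",
       "Status": "RUNNING"
}
```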

## Using stream processors for face searching (Java V2 example)


The following example code shows how to call various stream processor operations, such as [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html) and [StartStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartStreamProcessor.html), using the AWS SDK for Java version 2.

This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/CreateStreamProcessor.java).

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.CreateStreamProcessorRequest;
import software.amazon.awssdk.services.rekognition.model.CreateStreamProcessorResponse;
import software.amazon.awssdk.services.rekognition.model.FaceSearchSettings;
import software.amazon.awssdk.services.rekognition.model.KinesisDataStream;
import software.amazon.awssdk.services.rekognition.model.KinesisVideoStream;
import software.amazon.awssdk.services.rekognition.model.ListStreamProcessorsRequest;
import software.amazon.awssdk.services.rekognition.model.ListStreamProcessorsResponse;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.StreamProcessor;
import software.amazon.awssdk.services.rekognition.model.StreamProcessorInput;
import software.amazon.awssdk.services.rekognition.model.StreamProcessorSettings;
import software.amazon.awssdk.services.rekognition.model.StreamProcessorOutput;
import software.amazon.awssdk.services.rekognition.model.StartStreamProcessorRequest;
import software.amazon.awssdk.services.rekognition.model.DescribeStreamProcessorRequest;
import software.amazon.awssdk.services.rekognition.model.DescribeStreamProcessorResponse;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 * <p>
 * For more information, see the following documentation topic:
 * <p>
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class CreateStreamProcessor {
    public static void main(String[] args) {
        final String usage = """
                
                Usage:    <role> <kinInputStream> <kinOutputStream> <collectionName> <StreamProcessorName>
                
                Where:
                   role - The ARN of the AWS Identity and Access Management (IAM) role to use. \s
                   kinInputStream - The ARN of the Kinesis video stream.\s
                   kinOutputStream - The ARN of the Kinesis data stream.\s
                   collectionName - The name of the collection to use that contains content. \s
                   StreamProcessorName - The name of the Stream Processor. \s
                """;

        if (args.length != 5) {
            System.out.println(usage);
            System.exit(1);
        }

        String role = args[0];
        String kinInputStream = args[1];
        String kinOutputStream = args[2];
        String collectionName = args[3];
        String streamProcessorName = args[4];

        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

        processCollection(rekClient, streamProcessorName, kinInputStream, kinOutputStream, collectionName,
                role);
        startSpecificStreamProcessor(rekClient, streamProcessorName);
        listStreamProcessors(rekClient);
        describeStreamProcessor(rekClient, streamProcessorName);
        deleteSpecificStreamProcessor(rekClient, streamProcessorName);
    }

    public static void listStreamProcessors(RekognitionClient rekClient) {
        ListStreamProcessorsRequest request = ListStreamProcessorsRequest.builder()
                .maxResults(15)
                .build();

        ListStreamProcessorsResponse listStreamProcessorsResult = rekClient.listStreamProcessors(request);
        for (StreamProcessor streamProcessor : listStreamProcessorsResult.streamProcessors()) {
            System.out.println("StreamProcessor name - " + streamProcessor.name());
            System.out.println("Status - " + streamProcessor.status());
        }
    }

    private static void describeStreamProcessor(RekognitionClient rekClient, String StreamProcessorName) {
        DescribeStreamProcessorRequest streamProcessorRequest = DescribeStreamProcessorRequest.builder()
                .name(StreamProcessorName)
                .build();

        DescribeStreamProcessorResponse describeStreamProcessorResult = rekClient
                .describeStreamProcessor(streamProcessorRequest);
        System.out.println("Arn - " + describeStreamProcessorResult.streamProcessorArn());
        System.out.println("Input kinesisVideo stream - "
                + describeStreamProcessorResult.input().kinesisVideoStream().arn());
        System.out.println("Output kinesisData stream - "
                + describeStreamProcessorResult.output().kinesisDataStream().arn());
        System.out.println("RoleArn - " + describeStreamProcessorResult.roleArn());
        System.out.println(
                "CollectionId - "
                        + describeStreamProcessorResult.settings().faceSearch().collectionId());
        System.out.println("Status - " + describeStreamProcessorResult.status());
        System.out.println("Status message - " + describeStreamProcessorResult.statusMessage());
        System.out.println("Creation timestamp - " + describeStreamProcessorResult.creationTimestamp());
        System.out.println("Last update timestamp - " + describeStreamProcessorResult.lastUpdateTimestamp());
    }

    private static void startSpecificStreamProcessor(RekognitionClient rekClient, String StreamProcessorName) {
        try {
            StartStreamProcessorRequest streamProcessorRequest = StartStreamProcessorRequest.builder()
                    .name(StreamProcessorName)
                    .build();

            rekClient.startStreamProcessor(streamProcessorRequest);
            System.out.println("Stream Processor " + StreamProcessorName + " started.");

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    private static void processCollection(RekognitionClient rekClient, String StreamProcessorName,
                                          String kinInputStream, String kinOutputStream, String collectionName, String role) {
        try {
            KinesisVideoStream videoStream = KinesisVideoStream.builder()
                    .arn(kinInputStream)
                    .build();

            KinesisDataStream dataStream = KinesisDataStream.builder()
                    .arn(kinOutputStream)
                    .build();

            StreamProcessorOutput processorOutput = StreamProcessorOutput.builder()
                    .kinesisDataStream(dataStream)
                    .build();

            StreamProcessorInput processorInput = StreamProcessorInput.builder()
                    .kinesisVideoStream(videoStream)
                    .build();

            FaceSearchSettings searchSettings = FaceSearchSettings.builder()
                    .faceMatchThreshold(75f)
                    .collectionId(collectionName)
                    .build();

            StreamProcessorSettings processorSettings = StreamProcessorSettings.builder()
                    .faceSearch(searchSettings)
                    .build();

            CreateStreamProcessorRequest processorRequest = CreateStreamProcessorRequest.builder()
                    .name(StreamProcessorName)
                    .input(processorInput)
                    .output(processorOutput)
                    .roleArn(role)
                    .settings(processorSettings)
                    .build();

            CreateStreamProcessorResponse response = rekClient.createStreamProcessor(processorRequest);
            System.out.println("The ARN for the newly created stream processor is "
                    + response.streamProcessorArn());

        } catch (RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }

    private static void deleteSpecificStreamProcessor(RekognitionClient rekClient, String StreamProcessorName) {
        rekClient.stopStreamProcessor(a -> a.name(StreamProcessorName));
        rekClient.deleteStreamProcessor(a -> a.name(StreamProcessorName));
        System.out.println("Stream Processor " + StreamProcessorName + " deleted.");
    }
}
```

## Using stream processors for face searching (Java V1 example)


The following example code shows how to call various stream processor operations, such as [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html) and [StartStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartStreamProcessor.html), using Java V1. The example includes a stream processor manager class (StreamManager) that provides methods to call stream processor operations. The starter class (Starter) creates a StreamManager object and calls various operations. 

**To configure the example:**

1. Set the values of the Starter class member fields to your desired values.

1. In the Starter class function `main`, uncomment the desired function call.

### Starter class


```
//Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

// Starter class. Use to create a StreamManager class
// and call stream processor operations.
package com.amazonaws.samples;

public class Starter {

	public static void main(String[] args) {
		
		
    	String streamProcessorName="Stream Processor Name";
    	String kinesisVideoStreamArn="Kinesis Video Stream Arn";
    	String kinesisDataStreamArn="Kinesis Data Stream Arn";
    	String roleArn="Role Arn";
    	String collectionId="Collection ID";
    	Float matchThreshold=50F;

		try {
			StreamManager sm= new StreamManager(streamProcessorName,
					kinesisVideoStreamArn,
					kinesisDataStreamArn,
					roleArn,
					collectionId,
					matchThreshold);
			//sm.createStreamProcessor();
			//sm.startStreamProcessor();
			//sm.deleteStreamProcessor();
			//sm.stopStreamProcessor();
			//sm.listStreamProcessors();
			//sm.describeStreamProcessor();
		}
		catch(Exception e){
			System.out.println(e.getMessage());
		}
	}
}
```

### StreamManager class


```
//Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

// Stream manager class. Provides methods for calling
// Stream Processor operations.
package com.amazonaws.samples;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.CreateStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.CreateStreamProcessorResult;
import com.amazonaws.services.rekognition.model.DeleteStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.DeleteStreamProcessorResult;
import com.amazonaws.services.rekognition.model.DescribeStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.DescribeStreamProcessorResult;
import com.amazonaws.services.rekognition.model.FaceSearchSettings;
import com.amazonaws.services.rekognition.model.KinesisDataStream;
import com.amazonaws.services.rekognition.model.KinesisVideoStream;
import com.amazonaws.services.rekognition.model.ListStreamProcessorsRequest;
import com.amazonaws.services.rekognition.model.ListStreamProcessorsResult;
import com.amazonaws.services.rekognition.model.StartStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.StartStreamProcessorResult;
import com.amazonaws.services.rekognition.model.StopStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.StopStreamProcessorResult;
import com.amazonaws.services.rekognition.model.StreamProcessor;
import com.amazonaws.services.rekognition.model.StreamProcessorInput;
import com.amazonaws.services.rekognition.model.StreamProcessorOutput;
import com.amazonaws.services.rekognition.model.StreamProcessorSettings;

public class StreamManager {

    private String streamProcessorName;
    private String kinesisVideoStreamArn;
    private String kinesisDataStreamArn;
    private String roleArn;
    private String collectionId;
    private float matchThreshold;

    private AmazonRekognition rekognitionClient;
    

    public StreamManager(String spName,
    		String kvStreamArn,
    		String kdStreamArn,
    		String iamRoleArn,
    		String collId,
    		Float threshold){
    	streamProcessorName=spName;
    	kinesisVideoStreamArn=kvStreamArn;
    	kinesisDataStreamArn=kdStreamArn;
    	roleArn=iamRoleArn;
    	collectionId=collId;
    	matchThreshold=threshold;
    	rekognitionClient=AmazonRekognitionClientBuilder.defaultClient();
    	
    }
    
    public void createStreamProcessor() {
    	//Setup input parameters
        KinesisVideoStream kinesisVideoStream = new KinesisVideoStream().withArn(kinesisVideoStreamArn);
        StreamProcessorInput streamProcessorInput =
                new StreamProcessorInput().withKinesisVideoStream(kinesisVideoStream);
        KinesisDataStream kinesisDataStream = new KinesisDataStream().withArn(kinesisDataStreamArn);
        StreamProcessorOutput streamProcessorOutput =
                new StreamProcessorOutput().withKinesisDataStream(kinesisDataStream);
        FaceSearchSettings faceSearchSettings =
                new FaceSearchSettings().withCollectionId(collectionId).withFaceMatchThreshold(matchThreshold);
        StreamProcessorSettings streamProcessorSettings =
                new StreamProcessorSettings().withFaceSearch(faceSearchSettings);

        //Create the stream processor
        CreateStreamProcessorResult createStreamProcessorResult = rekognitionClient.createStreamProcessor(
                new CreateStreamProcessorRequest().withInput(streamProcessorInput).withOutput(streamProcessorOutput)
                        .withSettings(streamProcessorSettings).withRoleArn(roleArn).withName(streamProcessorName));

        //Display result
        System.out.println("Stream Processor " + streamProcessorName + " created.");
        System.out.println("StreamProcessorArn - " + createStreamProcessorResult.getStreamProcessorArn());
    }

    public void startStreamProcessor() {
        StartStreamProcessorResult startStreamProcessorResult =
                rekognitionClient.startStreamProcessor(new StartStreamProcessorRequest().withName(streamProcessorName));
        System.out.println("Stream Processor " + streamProcessorName + " started.");
    }

    public void stopStreamProcessor() {
        StopStreamProcessorResult stopStreamProcessorResult =
                rekognitionClient.stopStreamProcessor(new StopStreamProcessorRequest().withName(streamProcessorName));
        System.out.println("Stream Processor " + streamProcessorName + " stopped.");
    }

    public void deleteStreamProcessor() {
        DeleteStreamProcessorResult deleteStreamProcessorResult = rekognitionClient
                .deleteStreamProcessor(new DeleteStreamProcessorRequest().withName(streamProcessorName));
        System.out.println("Stream Processor " + streamProcessorName + " deleted.");
    }

    public void describeStreamProcessor() {
        DescribeStreamProcessorResult describeStreamProcessorResult = rekognitionClient
                .describeStreamProcessor(new DescribeStreamProcessorRequest().withName(streamProcessorName));

        //Display various stream processor attributes.
        System.out.println("Arn - " + describeStreamProcessorResult.getStreamProcessorArn());
        System.out.println("Input kinesisVideo stream - "
                + describeStreamProcessorResult.getInput().getKinesisVideoStream().getArn());
        System.out.println("Output kinesisData stream - "
                + describeStreamProcessorResult.getOutput().getKinesisDataStream().getArn());
        System.out.println("RoleArn - " + describeStreamProcessorResult.getRoleArn());
        System.out.println(
                "CollectionId - " + describeStreamProcessorResult.getSettings().getFaceSearch().getCollectionId());
        System.out.println("Status - " + describeStreamProcessorResult.getStatus());
        System.out.println("Status message - " + describeStreamProcessorResult.getStatusMessage());
        System.out.println("Creation timestamp - " + describeStreamProcessorResult.getCreationTimestamp());
        System.out.println("Last update timestamp - " + describeStreamProcessorResult.getLastUpdateTimestamp());
    }

    public void listStreamProcessors() {
        ListStreamProcessorsResult listStreamProcessorsResult =
                rekognitionClient.listStreamProcessors(new ListStreamProcessorsRequest().withMaxResults(100));

        //List all stream processors (and state) returned from Rekognition
        for (StreamProcessor streamProcessor : listStreamProcessorsResult.getStreamProcessors()) {
            System.out.println("StreamProcessor name - " + streamProcessor.getName());
            System.out.println("Status - " + streamProcessor.getStatus());
        }
    }
}
```

# Reading streaming video analysis results


You can use the Amazon Kinesis Data Streams Client Library to consume analysis results that are sent to the Amazon Kinesis Data Streams output stream. For more information, see [Reading Data from a Kinesis Data Stream](https://docs.aws.amazon.com/streams/latest/dev/building-consumers.html). Amazon Rekognition Video places a JSON frame record for each analyzed frame into the Kinesis output stream. Amazon Rekognition Video doesn't analyze every frame that's passed to it through the Kinesis video stream. 

A frame record that's sent to a Kinesis data stream contains information about which Kinesis video stream fragment the frame is in, where the frame is in the fragment, and faces that are recognized in the frame. It also includes status information for the stream processor. For more information, see [Understanding the Kinesis face recognition JSON frame record](streaming-video-kinesis-output-reference.md).
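The input portion of a frame record can be modeled with a small data class. The following is a minimal Java sketch; the class and field names are illustrative (the Kinesis Video Streams Parser Library's Rekognition examples provide similar ready-made model classes), and the values come from the example record later in this topic.

```java
public class FrameRecordModel {
    // Minimal model of the InputInformation.KinesisVideo portion of a frame
    // record. Names are illustrative, not an official API.
    record KinesisVideo(String streamArn, String fragmentNumber,
                        double serverTimestamp, double producerTimestamp,
                        double frameOffsetInSeconds) {}

    public static void main(String[] args) {
        KinesisVideo input = new KinesisVideo(
                "arn:aws:kinesisvideo:us-west-2:nnnnnnnnnnnn:stream/stream-name",
                "91343852333289682796718532614445757584843717598",
                1510552593.455, 1510552593.193, 2.0);
        // The (FragmentNumber, FrameOffsetInSeconds) pair identifies the frame.
        System.out.println(input.fragmentNumber() + " @ " + input.frameOffsetInSeconds());
    }
}
```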

The Amazon Kinesis Video Streams Parser Library contains example tests that consume Amazon Rekognition Video results and integrate them with the original Kinesis video stream. For more information, see [Displaying Rekognition results with Kinesis Video Streams locally](displaying-rekognition-results-locally.md).

Amazon Rekognition Video streams analysis information to the Kinesis data stream. The following is a JSON example for a single record. 

```
{
  "InputInformation": {
    "KinesisVideo": {
      "StreamArn": "arn:aws:kinesisvideo:us-west-2:nnnnnnnnnnnn:stream/stream-name",
      "FragmentNumber": "91343852333289682796718532614445757584843717598",
      "ServerTimestamp": 1510552593.455,
      "ProducerTimestamp": 1510552593.193,
      "FrameOffsetInSeconds": 2
    }
  },
  "StreamProcessorInformation": {
    "Status": "RUNNING"
  },
  "FaceSearchResponse": [
    {
      "DetectedFace": {
        "BoundingBox": {
          "Height": 0.075,
          "Width": 0.05625,
          "Left": 0.428125,
          "Top": 0.40833333
        },
        "Confidence": 99.975174,
        "Landmarks": [
          {
            "X": 0.4452057,
            "Y": 0.4395594,
            "Type": "eyeLeft"
          },
          {
            "X": 0.46340984,
            "Y": 0.43744427,
            "Type": "eyeRight"
          },
          {
            "X": 0.45960626,
            "Y": 0.4526856,
            "Type": "nose"
          },
          {
            "X": 0.44958648,
            "Y": 0.4696949,
            "Type": "mouthLeft"
          },
          {
            "X": 0.46409217,
            "Y": 0.46704912,
            "Type": "mouthRight"
          }
        ],
        "Pose": {
          "Pitch": 2.9691637,
          "Roll": -6.8904796,
          "Yaw": 23.84388
        },
        "Quality": {
          "Brightness": 40.592964,
          "Sharpness": 96.09616
        }
      },
      "MatchedFaces": [
        {
          "Similarity": 88.863960,
          "Face": {
            "BoundingBox": {
              "Height": 0.557692,
              "Width": 0.749838,
              "Left": 0.103426,
              "Top": 0.206731
            },
            "FaceId": "ed1b560f-d6af-5158-989a-ff586c931545",
            "Confidence": 99.999201,
            "ImageId": "70e09693-2114-57e1-807c-50b6d61fa4dc",
            "ExternalImageId": "matchedImage.jpeg"
          }
        }
      ]
    }
  ]
}
```

In the JSON example, note the following:
+ **InputInformation** – Information about the Kinesis video stream that's used to stream video into Amazon Rekognition Video. For more information, see [InputInformation](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-inputinformation).
+ **StreamProcessorInformation** – Status information for the Amazon Rekognition Video stream processor. The only possible value for the `Status` field is RUNNING. For more information, see [StreamProcessorInformation](streaming-video-kinesis-output-reference-streamprocessorinformation.md).
+ **FaceSearchResponse** – Contains information about faces in the streaming video that match faces in the input collection. [FaceSearchResponse](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-facesearchresponse) contains a [DetectedFace](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-detectedface) object, which is a face that was detected in the analyzed video frame. For each detected face, the array `MatchedFaces` contains an array of matching face objects ([MatchedFace](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-facematch)) found in the input collection, along with a similarity score. 

## Mapping the Kinesis video stream to the Kinesis data stream


You might want to map the Kinesis video stream frames to the analyzed frames that are sent to the Kinesis data stream. For example, during the display of a streaming video, you might want to display boxes around the faces of recognized people. The bounding box coordinates are sent as part of the Kinesis Face Recognition Record to the Kinesis data stream. To display the bounding box correctly, you need to map the time information that's sent with the Kinesis Face Recognition Record to the corresponding frames in the source Kinesis video stream.
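Because `BoundingBox` values in the frame record are ratios (0-1) of the overall frame dimensions, you also need the frame's pixel size to draw a box. The following minimal sketch converts the `DetectedFace.BoundingBox` values from the example record earlier in this topic to pixel coordinates, assuming a hypothetical 640x480 frame:

```java
public class BoundingBoxToPixels {
    public static void main(String[] args) {
        int frameWidth = 640, frameHeight = 480; // hypothetical frame size
        // DetectedFace.BoundingBox ratio values from the example record.
        double left = 0.428125, top = 0.40833333, width = 0.05625, height = 0.075;
        // Scale each ratio by the corresponding frame dimension.
        long x = Math.round(left * frameWidth);
        long y = Math.round(top * frameHeight);
        long w = Math.round(width * frameWidth);
        long h = Math.round(height * frameHeight);
        System.out.println(x + "," + y + "," + w + "," + h);
    }
}
```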

The technique that you use to map the Kinesis video stream to the Kinesis data stream depends on whether you're streaming live media (such as a live streaming video) or archived media (such as a stored video).

### Mapping when you're streaming live media


**To map a Kinesis video stream frame to a Kinesis data stream frame**

1. Set the input parameter `FragmentTimeCodeType` of the [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html) operation to `RELATIVE`. 

1. Call `PutMedia` to deliver live media into the Kinesis video stream.

1. When you receive a Kinesis Face Recognition Record from the Kinesis data stream, store the values of `ProducerTimestamp` and `FrameOffsetInSeconds` from the [KinesisVideo](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo) field.

1. Calculate the time stamp that corresponds to the Kinesis video stream frame by adding the `ProducerTimestamp` and `FrameOffsetInSeconds` field values together. 
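The calculation in the last step can be sketched as follows, using the `ProducerTimestamp` and `FrameOffsetInSeconds` values from the example frame record earlier in this topic:

```java
public class LiveFrameTimestamp {
    // The frame's absolute Unix time (in seconds) is the fragment's
    // ProducerTimestamp plus the frame's FrameOffsetInSeconds.
    static double frameTimestamp(double producerTimestamp, double frameOffsetInSeconds) {
        return producerTimestamp + frameOffsetInSeconds;
    }

    public static void main(String[] args) {
        // Values from the example frame record.
        System.out.printf("%.3f%n", frameTimestamp(1510552593.193, 2.0));
    }
}
```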

### Mapping when you're streaming archived media


**To map a Kinesis video stream frame to a Kinesis data stream frame**

1. Call [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html) to deliver archived media into the Kinesis video stream.

1. When you receive an `Acknowledgement` object from the `PutMedia` operation response, store the `FragmentNumber` field value from the [Payload](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html#API_dataplane_PutMedia_ResponseSyntax) field. `FragmentNumber` is the fragment number for the MKV cluster. 

1. When you receive a Kinesis Face Recognition Record from the Kinesis data stream, store the `FrameOffsetInSeconds` field value from the [KinesisVideo](streaming-video-kinesis-output-reference.md#streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo) field. 

1. Calculate the mapping by using the `FrameOffsetInSeconds` and `FragmentNumber` values that you stored in steps 2 and 3. `FrameOffsetInSeconds` is the offset into the fragment with the specific `FragmentNumber` that's sent to the Amazon Kinesis data stream. For more information about getting the video frames for a given fragment number, see [Amazon Kinesis Video Streams Archived Media](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_Operations_Amazon_Kinesis_Video_Streams_Archived_Media.html).
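One way to associate the values stored in steps 2 and 3 is to index the frame offsets by fragment number. The following is a minimal sketch; the class and method names are illustrative, and the values are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ArchivedFrameIndex {
    // Maps each FragmentNumber (stored in step 2) to the FrameOffsetInSeconds
    // values (stored in step 3) of the analyzed frames in that fragment.
    private final Map<String, List<Double>> offsetsByFragment = new TreeMap<>();

    void add(String fragmentNumber, double frameOffsetInSeconds) {
        offsetsByFragment.computeIfAbsent(fragmentNumber, k -> new ArrayList<>())
                .add(frameOffsetInSeconds);
    }

    public static void main(String[] args) {
        ArchivedFrameIndex index = new ArchivedFrameIndex();
        // Hypothetical values for illustration.
        index.add("91343852333289682796718532614445757584843717598", 2.0);
        index.add("91343852333289682796718532614445757584843717598", 4.5);
        System.out.println(index.offsetsByFragment);
    }
}
```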

# Displaying Rekognition results with Kinesis Video Streams locally



You can see the results of Amazon Rekognition Video displayed in your feed from Amazon Kinesis Video Streams by using the Amazon Kinesis Video Streams Parser Library's example tests, provided at [KinesisVideo - Rekognition Examples](https://github.com/aws/amazon-kinesis-video-streams-parser-library#kinesisvideo---rekognition-examples). The `KinesisVideoRekognitionIntegrationExample` test displays bounding boxes over detected faces and renders the video locally in a JFrame window. This process assumes that you have successfully connected media input from a device camera to a Kinesis video stream and started an Amazon Rekognition stream processor. For more information, see [Streaming using a GStreamer plugin](streaming-using-gstreamer-plugin.md). 

## Step 1: Installing Kinesis Video Streams Parser Library


Clone the GitHub repository by running the following command: 

```
$ git clone https://github.com/aws/amazon-kinesis-video-streams-parser-library.git
```

 Navigate to the library directory and run the following Maven command to perform a clean installation: 

```
$ mvn clean install
```

## Step 2: Configuring the Kinesis Video Streams and Rekognition integration example test


Open the `KinesisVideoRekognitionIntegrationExampleTest.java` file. Remove the `@Ignore` annotation that's right after the class header. Populate the data fields with the information from your Amazon Kinesis and Amazon Rekognition resources. For more information, see [Setting up your Amazon Rekognition Video and Amazon Kinesis resources](setting-up-your-amazon-rekognition-streaming-video-resources.md). If you are streaming video to your Kinesis video stream, remove the `inputStream` parameter. 

 See the following code example: 

```
RekognitionInput rekognitionInput = RekognitionInput.builder()
  .kinesisVideoStreamArn("arn:aws:kinesisvideo:us-east-1:123456789012:stream/rekognition-test-video-stream")
  .kinesisDataStreamArn("arn:aws:kinesis:us-east-1:123456789012:stream/AmazonRekognition-rekognition-test-data-stream")
  .streamingProcessorName("rekognition-test-stream-processor")
  // Refer how to add face collection :
  // https://docs.aws.amazon.com/rekognition/latest/dg/add-faces-to-collection-procedure.html
  .faceCollectionId("rekognition-test-face-collection")
  .iamRoleArn("rekognition-test-IAM-role")
  .matchThreshold(0.95f)
  .build();

KinesisVideoRekognitionIntegrationExample example = KinesisVideoRekognitionIntegrationExample.builder()
  .region(Regions.US_EAST_1)
  .kvsStreamName("rekognition-test-video-stream")
  .kdsStreamName("AmazonRekognition-rekognition-test-data-stream")
  .rekognitionInput(rekognitionInput)
  .credentialsProvider(new ProfileCredentialsProvider())
  // NOTE: Comment out or delete the inputStream parameter if you are streaming video, otherwise
  // the test will use a sample video. 
  //.inputStream(TestResourceUtil.getTestInputStream("bezos_vogels.mkv"))
  .build();
```

## Step 3: Running the Kinesis Video Streams and Rekognition integration example test


If you are streaming to your Kinesis video stream, ensure that it is receiving media input, and start analyzing your stream with a running Amazon Rekognition Video stream processor. For more information, see [Overview of Amazon Rekognition Video stream processor operations](streaming-video.md#using-rekognition-video-stream-processor). Run the `KinesisVideoRekognitionIntegrationExampleTest` class as a JUnit test. After a short delay, a new window opens with a video feed from your Kinesis video stream, with bounding boxes drawn over detected faces. 

**Note**  
The faces in the collection used in this example must have an `ExternalImageId` (the file name) specified in the format PersonName1-Trusted, PersonName2-Intruder, PersonName3-Neutral, and so on, for the bounding box labels to display meaningful text. The labels can also be color-coded and are customizable in the `FaceType.java` file. 
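For illustration, a label in this format can be split into a person name and a face type as follows (a minimal sketch with a hypothetical value; the actual parsing logic in `FaceType.java` may differ):

```java
public class FaceLabelParser {
    public static void main(String[] args) {
        // ExternalImageId in the format described above: PersonName-Type.
        String externalImageId = "PersonName2-Intruder"; // hypothetical value
        // Split on the last hyphen so names containing hyphens still work.
        int dash = externalImageId.lastIndexOf('-');
        String name = externalImageId.substring(0, dash);
        String type = externalImageId.substring(dash + 1);
        System.out.println(name + " / " + type);
    }
}
```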

# Understanding the Kinesis face recognition JSON frame record



Amazon Rekognition Video can recognize faces in a streaming video. For each analyzed frame, Amazon Rekognition Video outputs a JSON frame record to a Kinesis data stream. Amazon Rekognition Video doesn't analyze every frame that's passed to it through the Kinesis video stream. 

The JSON frame record contains information about the input and output stream, the status of the stream processor, and information about faces that are recognized in the analyzed frame. This section contains reference information for the JSON frame record.

The following is the JSON syntax for a Kinesis data stream record. For more information, see [Working with streaming video events](streaming-video.md).

**Note**  
The Amazon Rekognition Video API works by comparing the faces in your input stream to a collection of faces, and returning the closest found matches, along with a similarity score.

```
{
    "InputInformation": {
        "KinesisVideo": {
            "StreamArn": "string",
            "FragmentNumber": "string",
            "ProducerTimestamp": number,
            "ServerTimestamp": number,
            "FrameOffsetInSeconds": number
        }
    },
    "StreamProcessorInformation": {
        "Status": "RUNNING"
    },
    "FaceSearchResponse": [
        {
            "DetectedFace": {
                "BoundingBox": {
                    "Width": number,
                    "Top": number,
                    "Height": number,
                    "Left": number
                },
                "Confidence": number,
                "Landmarks": [
                    {
                        "Type": "string",
                        "X": number,
                        "Y": number
                    }
                ],
                "Pose": {
                    "Pitch": number,
                    "Roll": number,
                    "Yaw": number
                },
                "Quality": {
                    "Brightness": number,
                    "Sharpness": number
                }
            },
            "MatchedFaces": [
                {
                    "Similarity": number,
                    "Face": {
                        "BoundingBox": {
                            "Width": number,
                            "Top": number,
                            "Height": number,
                            "Left": number
                        },
                        "Confidence": number,
                        "ExternalImageId": "string",
                        "FaceId": "string",
                        "ImageId": "string"
                    }
                }
            ]
        }
    ]
}
```

## JSON record


The JSON record includes information about a frame that's processed by Amazon Rekognition Video. The record includes information about the streaming video, the status for the analyzed frame, and information about faces that are recognized in the frame.

**InputInformation**

Information about the Kinesis video stream that's used to stream video into Amazon Rekognition Video.

Type: [InputInformation](#streaming-video-kinesis-output-reference-inputinformation) object

**StreamProcessorInformation**

Information about the Amazon Rekognition Video stream processor. This includes the current status of the stream processor.

Type: [StreamProcessorInformation](streaming-video-kinesis-output-reference-streamprocessorinformation.md) object 

**FaceSearchResponse**

Information about the faces detected in a streaming video frame and the matching faces found in the input collection.

Type: [FaceSearchResponse](#streaming-video-kinesis-output-reference-facesearchresponse) object array

## InputInformation


Information about a source video stream that's used by Amazon Rekognition Video. For more information, see [Working with streaming video events](streaming-video.md).

**KinesisVideo**

Type: [KinesisVideo](#streaming-video-kinesis-output-reference-kinesisvideostreams-kinesisvideo) object

## KinesisVideo


Information about the Kinesis video stream that streams the source video into Amazon Rekognition Video. For more information, see [Working with streaming video events](streaming-video.md).

**StreamArn**

The Amazon Resource Name (ARN) of the Kinesis video stream.

Type: String 

**FragmentNumber**

The fragment of streaming video that contains the frame that this record represents.

Type: String

**ProducerTimestamp**

The producer-side Unix time stamp of the fragment. For more information, see [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html).

Type: Number

**ServerTimestamp**

The server-side Unix time stamp of the fragment. For more information, see [PutMedia](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_dataplane_PutMedia.html).

Type: Number

**FrameOffsetInSeconds**

The offset of the frame (in seconds) inside the fragment.

Type: Number 

# StreamProcessorInformation



Status information about the stream processor.

**Status**

The current status of the stream processor. The only possible value is RUNNING.

Type: String

## FaceSearchResponse


Information about a face detected in a streaming video frame and the faces in a collection that match the detected face. You specify the collection in a call to [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html). For more information, see [Working with streaming video events](streaming-video.md). 

**DetectedFace**

Face details for a face detected in an analyzed video frame.

Type: [DetectedFace](#streaming-video-kinesis-output-reference-detectedface) object

**MatchedFaces**

An array of face details for faces in a collection that match the face detected in `DetectedFace`.

Type: [MatchedFace](#streaming-video-kinesis-output-reference-facematch) object array

## DetectedFace


Information about a face that's detected in a streaming video frame. Matching faces in the input collection are available in the [MatchedFace](#streaming-video-kinesis-output-reference-facematch) object field.

**BoundingBox**

The bounding box coordinates for a face that's detected within an analyzed video frame. The BoundingBox object has the same properties as the BoundingBox object that's used for image analysis.

Type: [BoundingBox](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_BoundingBox.html) object 

**Confidence**

The confidence level (1-100) that Amazon Rekognition Video has that the detected face is actually a face. 1 is the lowest confidence, 100 is the highest.

Type: Number

**Landmarks**

An array of facial landmarks.

Type: [Landmark](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Landmark.html) object array

**Pose**

Indicates the pose of the face as determined by its pitch, roll, and yaw.

Type: [Pose](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Pose.html) object

**Quality**

Identifies face image brightness and sharpness. 

Type: [ImageQuality](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_ImageQuality.html) object

## MatchedFace


Information about a face that matches a face detected in an analyzed video frame.

**Face**

Face match information for a face in the input collection that matches the face in the [DetectedFace](#streaming-video-kinesis-output-reference-detectedface) object. 

Type: [Face](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Face.html) object 

**Similarity**

The level of confidence (1-100) that the faces match. 1 is the lowest confidence, 100 is the highest.

Type: Number 
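For example, a consumer might keep only the matches whose `Similarity` is above an application-specific threshold. The following is a minimal Java sketch with hypothetical face IDs and similarity scores:

```java
import java.util.List;
import java.util.stream.Collectors;

public class SimilarityFilter {
    // A matched face as reported in the MatchedFaces array (illustrative model).
    record Match(String faceId, double similarity) {}

    public static void main(String[] args) {
        // Hypothetical face IDs and Similarity scores (1-100).
        List<Match> matches = List.of(
                new Match("ed1b560f", 88.9),
                new Match("a1b2c3d4", 62.5));
        double threshold = 80.0; // application-specific cutoff
        List<String> accepted = matches.stream()
                .filter(m -> m.similarity() >= threshold)
                .map(Match::faceId)
                .collect(Collectors.toList());
        System.out.println(accepted);
    }
}
```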

# Streaming using a GStreamer plugin



Amazon Rekognition Video can analyze live streaming video from a device camera. To access media input from a device source, you need to install GStreamer, a third-party multimedia framework that connects media sources and processing elements together in workflow pipelines. You also need to install the [Amazon Kinesis Video Streams Producer Plugin](https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/) for GStreamer. This process assumes that you have successfully set up your Amazon Rekognition Video and Amazon Kinesis resources. For more information, see [Setting up your Amazon Rekognition Video and Amazon Kinesis resources](setting-up-your-amazon-rekognition-streaming-video-resources.md).

## Step 1: Install GStreamer


Download and install GStreamer. You can use a package manager such as Homebrew ([GStreamer on Homebrew](https://formulae.brew.sh/formula/gstreamer)) or get it directly from the [Freedesktop website](https://gstreamer.freedesktop.org/download/). 

Verify that GStreamer installed successfully by launching a video feed with a test source from your command line terminal. 

```
$ gst-launch-1.0 videotestsrc ! autovideosink
```

## Step 2: Install the Kinesis Video Streams Producer plugin


In this section, you download the [Amazon Kinesis Video Streams Producer Library](https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/) and install the Kinesis Video Streams GStreamer plugin. 

Create a directory and clone the source code from the GitHub repository. Be sure to include the `--recursive` parameter. 

```
$ git clone --recursive https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp.git
```

Follow the [instructions provided by the library](https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/blob/master/README.md) to configure and build the project, using the platform-specific commands for your operating system. When you run `cmake`, use the `-DBUILD_GSTREAMER_PLUGIN=ON` parameter to install the Kinesis Video Streams GStreamer plugin. The project requires the following additional packages, which are included in the installation: GCC or Clang, curl, OpenSSL, and log4cplus. If your build fails because of a missing package, verify that the package is installed and on your PATH. If you encounter a "can't run C compiled program" error while building, run the build command again; sometimes the correct C compiler isn't found on the first attempt. 

 Verify the installation of the Kinesis Video Streams plugin by running the following command. 

```
$ gst-inspect-1.0 kvssink
```

 The following information, such as factory and plugin details, should appear: 

```
Factory Details:
  Rank                     primary + 10 (266)
  Long-name                KVS Sink
  Klass                    Sink/Video/Network
  Description              GStreamer AWS KVS plugin
  Author                   AWS KVS <kinesis-video-support@amazon.com>
                
Plugin Details:
  Name                     kvssink
  Description              GStreamer AWS KVS plugin
  Filename                 /Users/YOUR_USER/amazon-kinesis-video-streams-producer-sdk-cpp/build/libgstkvssink.so
  Version                  1.0
  License                  Proprietary
  Source module            kvssinkpackage
  Binary package           GStreamer
  Origin URL               http://gstreamer.net/
  
  ...
```

## Step 3: Run GStreamer with the Kinesis Video Streams plugin


 Before you begin streaming from a device camera to Kinesis Video Streams, you might need to convert the media source to an acceptable codec for Kinesis Video Streams. To determine the specifications and format capabilities of devices currently connected to your machine, run the following command.

```
$ gst-device-monitor-1.0
```

To begin streaming, launch GStreamer with the following sample command, adding your credentials and Amazon Kinesis Video Streams information. Use the access keys and Region for the IAM service role that you created while [giving Amazon Rekognition access to your Kinesis streams](https://docs.aws.amazon.com/rekognition/latest/dg/api-streaming-video-roles.html#api-streaming-video-roles-all-stream). For more information about access keys, see [Managing Access Keys for IAM Users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) in the *IAM User Guide*. You can also adjust the video format parameters as required by your use case and as supported by your device. 

```
$ gst-launch-1.0 autovideosrc device=/dev/video0 ! videoconvert ! video/x-raw,format=I420,width=640,height=480,framerate=30/1 ! 
                x264enc bframes=0 key-int-max=45 bitrate=500 ! video/x-h264,stream-format=avc,alignment=au,profile=baseline ! 
                kvssink stream-name="YOUR_STREAM_NAME" storage-size=512 access-key="YOUR_ACCESS_KEY" secret-key="YOUR_SECRET_ACCESS_KEY" aws-region="YOUR_AWS_REGION"
```

 For more launch commands, see [Example GStreamer Launch Commands](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/examples-gstreamer-plugin.html#examples-gstreamer-plugin-launch). 

**Note**  
If your launch command terminates with a `not-negotiated` error, check the output from the device monitor and make sure that the `videoconvert` parameter values are valid capabilities of your device. 

 You will see a video feed from your device camera on your Kinesis video stream after a few seconds. To begin detecting and matching faces with Amazon Rekognition, start your Amazon Rekognition Video stream processor. For more information, see [Overview of Amazon Rekognition Video stream processor operations](streaming-video.md#using-rekognition-video-stream-processor). 

# Troubleshooting streaming video



This topic provides troubleshooting information for using Amazon Rekognition Video with streaming videos.

**Topics**
+ [I don't know if my stream processor was successfully created](#ts-streaming-video-create-sp)
+ [I don't know if I've configured my stream processor correctly](#ts-configured-sp)
+ [My stream processor isn't returning results](#ts-streaming-video-no-results-from-sp)
+ [The state of my stream processor is FAILED](#ts-failed-state)
+ [My stream processor isn't returning the expected results](#w2aac27c79c29c17)

## I don't know if my stream processor was successfully created


Use the following AWS CLI command to get a list of stream processors and their current status.

```
aws rekognition list-stream-processors
```

You can get additional details by using the following AWS CLI command. Replace `stream-processor-name` with the name of the required stream processor.

```
aws rekognition describe-stream-processor --name stream-processor-name
```

## I don't know if I've configured my stream processor correctly


If your code isn't outputting the analysis results from Amazon Rekognition Video, your stream processor might not be configured correctly. Do the following to confirm that your stream processor is configured correctly and able to produce results.

**To determine if your solution is configured correctly**

1. Run the following command to confirm that your stream processor is in the running state. Change `stream-processor-name` to the name of your stream processor. The stream processor is running if the value of `Status` is `RUNNING`. If the status is `RUNNING` and you aren't getting results, see [My stream processor isn't returning results](#ts-streaming-video-no-results-from-sp). If the status is `FAILED`, see [The state of my stream processor is FAILED](#ts-failed-state).

   ```
   aws rekognition describe-stream-processor --name stream-processor-name
   ```

1. If your stream processor is running, run the following Bash or PowerShell command to read data from the output Kinesis data stream. 

   **Bash**

   ```
   SHARD_ITERATOR=$(aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name kinesis-data-stream-name --query 'ShardIterator')
   aws kinesis get-records --shard-iterator $SHARD_ITERATOR
   ```

   **PowerShell**

   ```
   aws kinesis get-records --shard-iterator ((aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name kinesis-data-stream-name).split('"')[4])
   ```

1. Use the [Decode tool](https://www.base64decode.org/) on the Base64 Decode website to decode the output into a human-readable string. For more information, see [Step 3: Get the Record](https://docs.aws.amazon.com/streams/latest/dev/fundamental-stream.html#get-records).

1. If the commands work and you see face detection results in the Kinesis data stream, then your solution is properly configured. If the command fails, check the other troubleshooting suggestions and see [Giving Amazon Rekognition Video access to your resources](api-streaming-video-roles.md).

Alternatively, you can use the `kinesis-process-record` AWS Lambda blueprint to log messages from the Kinesis data stream to CloudWatch for continuous monitoring. This incurs additional costs for AWS Lambda and CloudWatch. 
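If you prefer to script the decode step instead of using the website, the round trip can be sketched in the shell. The record payload below is a hypothetical, minimal stream processor output; real records also carry `InputInformation` and populated `FaceSearchResponse` details.

```shell
#!/bin/sh
# Hypothetical, minimal stream processor output record (illustration only).
record='{"StreamProcessorInformation":{"Status":"RUNNING"},"FaceSearchResponse":[]}'

# Kinesis returns the Data field Base64-encoded; simulate that here.
encoded=$(printf '%s' "$record" | base64)

# Decode the payload back into a human-readable JSON string,
# which is what the Decode tool in step 3 does.
printf '%s' "$encoded" | base64 --decode
```

In practice you would pipe the `Data` field from the `aws kinesis get-records` output into `base64 --decode` instead of the sample record above.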

## My stream processor isn't returning results


Your stream processor might not return results for several reasons. 

### Reason 1: Your stream processor isn't configured correctly


Your stream processor might not be configured correctly. For more information, see [I don't know if I've configured my stream processor correctly](#ts-configured-sp).

### Reason 2: Your stream processor isn't in the RUNNING state


**To troubleshoot the status of a stream processor**

1. Check the status of the stream processor with the following AWS CLI command.

   ```
   aws rekognition describe-stream-processor --name stream-processor-name
   ```

1. If the value of `Status` is `STOPPED`, start your stream processor with the following command:

   ```
   aws rekognition start-stream-processor --name stream-processor-name
   ```

1. If the value of `Status` is `FAILED`, see [The state of my stream processor is FAILED](#ts-failed-state).

1. If the value of `Status` is `STARTING`, wait for 2 minutes and check the status by repeating step 1. If the value of `Status` is still `STARTING`, do the following:

   1. Delete the stream processor with the following command.

      ```
      aws rekognition delete-stream-processor --name stream-processor-name
      ```

   1. Create a new stream processor with the same configuration. For more information, see [Working with streaming video events](streaming-video.md).

   1. If you're still having problems, contact AWS Support.

1. If the value of `Status` is `RUNNING`, see [Reason 3: There isn't active data in the Kinesis video stream](#ts-no-data).
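The decision steps above can be sketched as a small shell helper. The `next_step` function and its messages are hypothetical; in practice you would feed it the `Status` value returned by `aws rekognition describe-stream-processor`.

```shell
#!/bin/sh
# Hypothetical helper: map a stream processor Status value to the next
# troubleshooting action described in the steps above.
next_step() {
  case "$1" in
    STOPPED)  echo "Run: aws rekognition start-stream-processor --name <name>" ;;
    FAILED)   echo "See: The state of my stream processor is FAILED" ;;
    STARTING) echo "Wait 2 minutes, then check the status again" ;;
    RUNNING)  echo "Check that the Kinesis video stream has active data" ;;
    *)        echo "Unexpected status: $1" ;;
  esac
}

# The status would normally come from:
#   aws rekognition describe-stream-processor --name <name> --query Status --output text
next_step STOPPED
```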

### Reason 3: There isn't active data in the Kinesis video stream


**To check if there's active data in the Kinesis video stream**

1. Sign in to the AWS Management Console, and open the Amazon Kinesis Video Streams console at [https://console.aws.amazon.com/kinesisvideo/](https://console.aws.amazon.com/kinesisvideo/).

1. Select the Kinesis video stream that's the input for the Amazon Rekognition stream processor.

1. If the preview states **No data on stream**, then there's no data in the input stream for Amazon Rekognition Video to process.

For information about producing video with Kinesis Video Streams, see [Kinesis Video Streams Producer Libraries](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/producer-sdk.html). 

## The state of my stream processor is FAILED


You can check the state of a stream processor by using the following AWS CLI command.

```
aws rekognition describe-stream-processor --name stream-processor-name
```

If the value of `Status` is `FAILED`, check the troubleshooting information for the following error messages.

### Error: "Access denied to Role"


The IAM role that's used by the stream processor doesn't exist or Amazon Rekognition Video doesn't have permission to assume the role.

**To troubleshoot access to the IAM role**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. From the left navigation pane, choose **Roles**, and confirm that the role exists. 

1. If the role exists, check that the role has the *AmazonRekognitionServiceRole* permissions policy.

1. If the role doesn't exist or doesn't have the right permissions, see [Giving Amazon Rekognition Video access to your resources](api-streaming-video-roles.md).

1. Start the stream processor with the following AWS CLI command.

   ```
   aws rekognition start-stream-processor --name stream-processor-name
   ```
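If you need to recreate the role, its trust relationship must allow Amazon Rekognition to assume it. The following is a sketch of what that trust policy typically looks like; confirm the exact policy against [Giving Amazon Rekognition Video access to your resources](api-streaming-video-roles.md).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "rekognition.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```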

### Error: "Access denied to Kinesis Video *or* Access denied to Kinesis Data"


The role doesn't have access to the Kinesis Video Streams API operations `GetMedia` and `GetDataEndpoint`, or it doesn't have access to the Kinesis Data Streams API operations `PutRecord` and `PutRecords`. 

**To troubleshoot API permissions**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Open the role and make sure that it has a permissions policy attached that allows the Kinesis Video Streams and Kinesis Data Streams operations listed above.

1. If any of the permissions are missing, update the policy. For more information, see [Giving Amazon Rekognition Video access to your resources](api-streaming-video-roles.md).
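As a reference point, a minimal permissions policy covering the operations listed above might look like the following. This is a sketch, not the exact managed policy; the Region, account ID, and stream names in the resource ARNs are placeholders for your own values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesisvideo:GetMedia",
        "kinesisvideo:GetDataEndpoint"
      ],
      "Resource": "arn:aws:kinesisvideo:us-east-1:111122223333:stream/input-video-stream-name/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/output-kinesis-data-stream-name"
    }
  ]
}
```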

### Error: "Stream *input-video-stream-name* doesn't exist"


The Kinesis video stream input to the stream processor doesn't exist or isn't configured correctly. 

**To troubleshoot the Kinesis video stream**

1. Use the following command to confirm that the stream exists. 

   ```
   aws kinesisvideo list-streams
   ```

1. If the stream exists, check the following.
   + The Amazon Resource Name (ARN) is the same as the ARN of the input stream for the stream processor.
   + The Kinesis video stream is in the same Region as the stream processor.

   If the stream processor isn't configured correctly, delete it with the following AWS CLI command.

   ```
   aws rekognition delete-stream-processor --name stream-processor-name
   ```

1. Create a new stream processor with the intended Kinesis video stream. For more information, see [Creating the Amazon Rekognition Video face search stream processor](rekognition-video-stream-processor-search-faces.md#streaming-video-creating-stream-processor).
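The Region check in step 2 can also be scripted: the fourth colon-separated field of a Kinesis video stream ARN is its Region. The ARN and Region values below are hypothetical; in practice, take the ARN from the `aws rekognition describe-stream-processor` output.

```shell
#!/bin/sh
# Hypothetical values for illustration.
# ARN format: arn:aws:kinesisvideo:<region>:<account-id>:stream/<name>/<creation-time>
stream_arn="arn:aws:kinesisvideo:us-east-1:111122223333:stream/my-input-stream/1234567890123"
processor_region="us-east-1"

# Extract the Region (fourth colon-separated field) from the ARN.
arn_region=$(printf '%s' "$stream_arn" | cut -d: -f4)

if [ "$arn_region" = "$processor_region" ]; then
  echo "Regions match: $arn_region"
else
  echo "Region mismatch: stream is in $arn_region, processor is in $processor_region"
fi
```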

### Error: "Collection not found"


The Amazon Rekognition collection that's used by the stream processor to match faces doesn't exist, or the wrong collection is being used.

**To confirm the collection**

1. Use the following AWS CLI command to determine if the required collection exists. Change `region` to the AWS Region in which you're running your stream processor.

   ```
   aws rekognition list-collections --region region
   ```

   If the required collection doesn't exist, create a new collection and add face information. For more information, see [Searching faces in a collection](collections.md).

1. In your call to [CreateStreamProcessor](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CreateStreamProcessor.html), check that the value of the `CollectionId` input parameter is correct.

1. Start the stream processor with the following AWS CLI command.

   ```
   aws rekognition start-stream-processor --name stream-processor-name
   ```

### Error: "Stream *output-kinesis-data-stream-name* under account *account-id* not found"


The output Kinesis data stream that's used by the stream processor doesn't exist in your AWS account or isn't in the same AWS Region as your stream processor.

**To troubleshoot the Kinesis data stream**

1. Use the following AWS CLI command to determine if the Kinesis data stream exists. Change `region` to the AWS Region in which you're using your stream processor.

   ```
   aws kinesis list-streams --region region
   ```

1. If the Kinesis data stream exists, check that its name is the same as the name of the output stream that's used by the stream processor.

1. If the Kinesis data stream doesn't exist, it might exist in another AWS Region. The Kinesis data stream must be in the same Region as the stream processor.

1. If necessary, create a new Kinesis data stream. 

   1. Create a Kinesis data stream with the same name as the name used by the stream processor. For more information, see [Step 1: Create a Data Stream](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html).

   1. Start the stream processor with the following AWS CLI command.

      ```
      aws rekognition start-stream-processor --name stream-processor-name
      ```

## My stream processor isn't returning the expected results


If your stream processor isn't returning the expected face matches, use the following information.
+ [Searching faces in a collection](collections.md)
+ [Recommendations for camera setup (streaming video)](recommendations-camera-streaming-video.md)