

# Detecting and analyzing faces
<a name="faces"></a>

Amazon Rekognition provides you with APIs you can use to detect and analyze faces in images and videos. This section provides an overview of the non-storage operations for facial analysis. These operations include functionalities like detecting facial landmarks, analyzing emotions, and comparing faces.

Amazon Rekognition can identify facial landmarks (e.g., eye position), detect emotions (e.g., happiness or sadness), and detect other attributes (e.g., the presence of glasses or face occlusion). When a face is detected, the system analyzes the facial attributes and returns a confidence score for each attribute.

![\[Smiling woman wearing sunglasses and driving a vintage yellow car on an open road.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/sample-detect-faces.png)


This section contains examples for both image and video operations. 

For more information about using Rekognition's image operations, see [Working with images](images.md). 

For more information about using Rekognition's video operations, see [Working with stored video analysis operations](video.md).

Note that these operations are non-storage operations. You can use storage operations and Face collections to save facial metadata for faces detected in an image. Later you can search for stored faces in both images and videos. For example, this enables searching for a specific person in a video. For more information, see [Searching faces in a collection](collections.md).

For more information, see the Faces section of [Amazon Rekognition FAQs](https://aws.amazon.com/rekognition/faqs/).

**Note**  
The face detection models used by Amazon Rekognition Image and Amazon Rekognition Video don't support the detection of faces in cartoon/animated characters or non-human entities. If you want to detect cartoon characters in images or videos, we recommend using Amazon Rekognition Custom Labels. For more information, see the [Amazon Rekognition Custom Labels Developer Guide](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html).

**Topics**
+ [Overview of face detection and face comparison](face-feature-differences.md)
+ [Guidelines on face attributes](guidance-face-attributes.md)
+ [Detecting faces in an image](faces-detect-images.md)
+ [Comparing faces in images](faces-comparefaces.md)
+ [Detecting faces in a stored video](faces-sqs-video.md)

# Overview of face detection and face comparison
<a name="face-feature-differences"></a>

Amazon Rekognition provides two primary machine learning capabilities for images containing faces: face detection and face comparison. These capabilities power features like facial analysis and identity verification, making them vital for applications ranging from security to personal photo organization.

**Face Detection**

Face detection systems address the question: "Is there a face in this picture?" Key aspects of face detection include:
+ **Location and orientation**: Determines the presence, location, scale, and orientation of faces in images or video frames.
+ **Face attributes**: Detects faces regardless of attributes like gender, age, or facial hair.
+ **Additional Information**: Provides details on face occlusion and eye gaze direction.

**Face Comparison**

Face comparison systems focus on the question: "Does the face in one image match a face in another image?" Face comparison functionalities include:
+ **Face matching predictions**: Compares a face in an image against a face in a provided database to predict matches.
+ **Face attribute handling**: Handles attributes so that faces are compared regardless of expression, facial hair, and age.

**Confidence scores and missed detections**

Both face detection and face comparison systems utilize confidence scores. A confidence score indicates the likelihood of predictions, such as the presence of a face or a match between faces. Higher scores indicate greater likelihood. For instance, 90% confidence suggests a higher probability of a correct detection or match than 60%.

If a face detection system doesn’t properly detect a face, or provides a low confidence prediction for an actual face, this is a missed detection/false negative. If the system incorrectly predicts the presence of a face at a high confidence level, this is a false alarm/false positive.

Similarly, a facial comparison system may not match two faces which belong to the same person (missed detection/false negative) or may incorrectly predict that two faces from different people are the same person (false alarm/false positive).

**Application design and threshold setting**
+ You can set a threshold that specifies the minimum confidence level required to return a result. Choosing appropriate confidence thresholds is essential for application design and decision-making based on the system outputs.
+ Your chosen confidence level should reflect your use case. Some examples of use cases and confidence thresholds:
  + **Photo Applications**: A lower threshold (e.g., 80%) might suffice for identifying family members in photos.
  + **High-Stakes Scenarios**: In use cases where the risk of missed detection or false alarm is higher, such as security applications, the system should use a higher confidence level. In such cases, a higher threshold (e.g., 99%) is recommended for accurate facial matches.
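
The thresholding described above can be sketched in a few lines of Python. This is an illustrative helper, not a Rekognition API call; the `Confidence` field mirrors the shape of Rekognition responses, and the sample data is made up.

```python
# Hypothetical helper: keep only predictions at or above a chosen
# confidence threshold. The candidate data below is illustrative.
def filter_by_confidence(predictions, min_confidence):
    return [p for p in predictions if p["Confidence"] >= min_confidence]

candidates = [
    {"Face": "A", "Confidence": 99.2},
    {"Face": "B", "Confidence": 81.5},
    {"Face": "C", "Confidence": 62.0},
]

# A photo-app threshold of 80 keeps faces A and B; a high-stakes
# threshold of 99 keeps only face A.
photo_matches = filter_by_confidence(candidates, 80)
secure_matches = filter_by_confidence(candidates, 99)
```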

For more information on setting and understanding confidence thresholds, see [Searching faces in a collection](collections.md). 

# Guidelines on face attributes
<a name="guidance-face-attributes"></a>

Here are specifics regarding how Amazon Rekognition processes and returns face attributes.
+ **FaceDetail Object**: For each detected face, a FaceDetail object is returned. This FaceDetail contains data on face landmarks, quality, pose, and more.
+ **Attribute Predictions**: Attributes like emotion, gender, age, and others are predicted. A confidence level is assigned for each prediction, and the predictions are returned with the respective confidence score. A 99% confidence threshold is recommended for sensitive use cases. For age estimation, the midpoint of the predicted age range offers the best approximation.

Note that gender and emotion predictions are based on physical appearance and should not be used for determining actual gender identity or emotional state. A gender binary (male/female) prediction is based on the physical appearance of a face in a particular image. It doesn't indicate a person's gender identity, and you shouldn't use Rekognition to make such a determination. We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services. Similarly, a prediction of an emotion doesn't indicate a person's actual internal emotional state, and you shouldn't use Rekognition to make such a determination. A person pretending to have a happy face in a picture might look happy, but might not be experiencing happiness.
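
As a quick illustration of the age-range guidance above, the midpoint can be computed directly from the `AgeRange` values in a `FaceDetail` object. This is a sketch; the sample values are illustrative.

```python
# Midpoint of the predicted age range, per the guidance above.
def estimated_age(age_range):
    return (age_range["Low"] + age_range["High"]) / 2

face_detail = {"AgeRange": {"Low": 18, "High": 26}}  # sample values
print(estimated_age(face_detail["AgeRange"]))  # 22.0
```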

**Application and Use Cases**

Here are some practical applications and use cases for these attributes:
+ **Applications**: Attributes like Smile, Pose, and Sharpness can be utilized for selecting profile pictures or estimating demographics anonymously.
+ **Common Use Cases**: Social media applications and demographic estimation at events or retail stores are typical examples.

For more detailed information about each attribute, see [FaceDetail](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_FaceDetail.html).

# Detecting faces in an image
<a name="faces-detect-images"></a>

Amazon Rekognition Image provides the [DetectFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectFaces.html) operation that looks for key facial features such as eyes, nose, and mouth to detect faces in an input image. Amazon Rekognition Image detects the 100 largest faces in an image.

You can provide the input image as an image byte array (base64-encoded image bytes), or specify an Amazon S3 object. In this procedure, you upload an image (JPEG or PNG) to your S3 bucket and specify the object key name.

**To detect faces in an image**

1. If you haven't already:

   1. Create or update a user with `AmazonRekognitionFullAccess` and `AmazonS3ReadOnlyAccess` permissions. For more information, see [Step 1: Set up an AWS account and create a User](setting-up.md#setting-up-iam).

   1. Install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md).

1. Upload an image (that contains one or more faces) to your S3 bucket. 

   For instructions, see [Uploading Objects into Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UploadingObjectsintoAmazonS3.html) in the *Amazon Simple Storage Service User Guide*.

1. Use the following examples to call `DetectFaces`.

------
#### [ Java ]

   This example displays the estimated age range for detected faces, and lists the JSON for all detected facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket where the image is stored.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   package aws.example.rekognition.image;
   
   import com.amazonaws.services.rekognition.AmazonRekognition;
   import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
   import com.amazonaws.services.rekognition.model.AmazonRekognitionException;
   import com.amazonaws.services.rekognition.model.Image;
   import com.amazonaws.services.rekognition.model.S3Object;
   import com.amazonaws.services.rekognition.model.AgeRange;
   import com.amazonaws.services.rekognition.model.Attribute;
   import com.amazonaws.services.rekognition.model.DetectFacesRequest;
   import com.amazonaws.services.rekognition.model.DetectFacesResult;
   import com.amazonaws.services.rekognition.model.FaceDetail;
   import com.fasterxml.jackson.databind.ObjectMapper;
   import java.util.List;
   
   
   public class DetectFaces {
      
      
      public static void main(String[] args) throws Exception {
   
         String photo = "input.jpg";
      String bucket = "amzn-s3-demo-bucket";
   
         AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
   
   
         DetectFacesRequest request = new DetectFacesRequest()
            .withImage(new Image()
               .withS3Object(new S3Object()
                  .withName(photo)
                  .withBucket(bucket)))
            .withAttributes(Attribute.ALL);
         // Replace Attribute.ALL with Attribute.DEFAULT to get default values.
   
         try {
            DetectFacesResult result = rekognitionClient.detectFaces(request);
            List < FaceDetail > faceDetails = result.getFaceDetails();
   
            for (FaceDetail face: faceDetails) {
               if (request.getAttributes().contains("ALL")) {
                  AgeRange ageRange = face.getAgeRange();
                  System.out.println("The detected face is estimated to be between "
                     + ageRange.getLow().toString() + " and " + ageRange.getHigh().toString()
                     + " years old.");
                  System.out.println("Here's the complete set of attributes:");
               } else { // non-default attributes have null values.
                  System.out.println("Here's the default set of attributes:");
               }
   
               ObjectMapper objectMapper = new ObjectMapper();
               System.out.println(objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(face));
            }
   
         } catch (AmazonRekognitionException e) {
            e.printStackTrace();
         }
   
      }
   
   }
   ```

------
#### [ Java V2 ]

   This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/DetectFaces.java).

   ```
   import java.util.List;
   
   //snippet-start:[rekognition.java2.detect_labels.import]
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.S3Object;
   import software.amazon.awssdk.services.rekognition.model.DetectFacesRequest;
   import software.amazon.awssdk.services.rekognition.model.DetectFacesResponse;
   import software.amazon.awssdk.services.rekognition.model.Image;
   import software.amazon.awssdk.services.rekognition.model.Attribute;
   import software.amazon.awssdk.services.rekognition.model.FaceDetail;
   import software.amazon.awssdk.services.rekognition.model.AgeRange;
   
   //snippet-end:[rekognition.java2.detect_labels.import]
   
   public class DetectFaces {
   
       public static void main(String[] args) {
           final String usage = "\n" +
               "Usage: " +
               "   <bucket> <image>\n\n" +
               "Where:\n" +
                "   bucket - The name of the Amazon S3 bucket that contains the image (for example, amzn-s3-demo-bucket)." +
               "   image - The name of the image located in the Amazon S3 bucket (for example, Lake.png). \n\n";
   
           if (args.length != 2) {
               System.out.println(usage);
               System.exit(1);
           }
   
           String bucket = args[0];
           String image = args[1];
           Region region = Region.US_WEST_2;
           RekognitionClient rekClient = RekognitionClient.builder()
               .region(region)
               .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
               .build();
   
           getLabelsfromImage(rekClient, bucket, image);
           rekClient.close();
       }
   
       // snippet-start:[rekognition.java2.detect_labels_s3.main]
       public static void getLabelsfromImage(RekognitionClient rekClient, String bucket, String image) {
   
           try {
               S3Object s3Object = S3Object.builder()
                   .bucket(bucket)
                   .name(image)
                   .build() ;
   
               Image myImage = Image.builder()
                   .s3Object(s3Object)
                   .build();
   
               DetectFacesRequest facesRequest = DetectFacesRequest.builder()
                       .attributes(Attribute.ALL)
                       .image(myImage)
                       .build();
   
                   DetectFacesResponse facesResponse = rekClient.detectFaces(facesRequest);
                   List<FaceDetail> faceDetails = facesResponse.faceDetails();
                   for (FaceDetail face : faceDetails) {
                       AgeRange ageRange = face.ageRange();
                       System.out.println("The detected face is estimated to be between "
                                   + ageRange.low().toString() + " and " + ageRange.high().toString()
                                   + " years old.");
   
                       System.out.println("There is a smile : "+face.smile().value().toString());
                   }
   
           } catch (RekognitionException e) {
               System.out.println(e.getMessage());
               System.exit(1);
           }
       }
    // snippet-end:[rekognition.java2.detect_labels.main]
   }
   ```

------
#### [ AWS CLI ]

   This example displays the JSON output from the `detect-faces` AWS CLI operation. Replace `file` with the name of an image file. Replace `amzn-s3-demo-bucket` with the name of the Amazon S3 bucket that contains the image file.

   ```
   aws rekognition detect-faces --image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"image-name"}}' \
                                --attributes "ALL" --profile profile-name --region region-name
   ```

   If you are accessing the CLI on a Windows device, use double quotes instead of single quotes around the JSON and escape the inner double quotes with a backslash (that is, \") to address any parser errors you may encounter. For an example, see the following:

   ```
   aws rekognition detect-faces --image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" --attributes "ALL" 
   --profile profile-name --region region-name
   ```

------
#### [ Python ]

   This example displays the estimated age range and other attributes for detected faces, and lists the JSON for all detected facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket where the image is stored. Replace the value of `profile_name` in the line that creates the Rekognition session with the name of your developer profile.

   ```
   import boto3
   import json
   
   def detect_faces(photo, bucket, region):
       
       session = boto3.Session(profile_name='profile-name',
                               region_name=region)
       client = session.client('rekognition', region_name=region)
   
       response = client.detect_faces(Image={'S3Object':{'Bucket':bucket,'Name':photo}},
                                      Attributes=['ALL'])
   
       print('Detected faces for ' + photo)
       for faceDetail in response['FaceDetails']:
           print('The detected face is between ' + str(faceDetail['AgeRange']['Low'])
                 + ' and ' + str(faceDetail['AgeRange']['High']) + ' years old')
   
           print('Here are the other attributes:')
           print(json.dumps(faceDetail, indent=4, sort_keys=True))
   
           # Access predictions for individual face details and print them
           print("Gender: " + str(faceDetail['Gender']))
           print("Smile: " + str(faceDetail['Smile']))
           print("Eyeglasses: " + str(faceDetail['Eyeglasses']))
           print("Face Occluded: " + str(faceDetail['FaceOccluded']))
           print("Emotions: " + str(faceDetail['Emotions'][0]))
   
       return len(response['FaceDetails'])
       
   def main():
       photo='photo'
       bucket='amzn-s3-demo-bucket'
       region='region'
       face_count=detect_faces(photo, bucket, region)
       print("Faces detected: " + str(face_count))
   
   if __name__ == "__main__":
       main()
   ```

------
#### [ .NET ]

   This example displays the estimated age range for detected faces, and lists the JSON for all detected facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket where the image is stored.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   using System;
   using System.Collections.Generic;
   using Amazon.Rekognition;
   using Amazon.Rekognition.Model;
   
   public class DetectFaces
   {
       public static void Example()
       {
           String photo = "input.jpg";
           String bucket = "amzn-s3-demo-bucket";
   
           AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
   
           DetectFacesRequest detectFacesRequest = new DetectFacesRequest()
           {
               Image = new Image()
               {
                   S3Object = new S3Object()
                   {
                       Name = photo,
                       Bucket = bucket
                   },
               },
               // Attributes can be "ALL" or "DEFAULT". 
               // "DEFAULT": BoundingBox, Confidence, Landmarks, Pose, and Quality.
               // "ALL": See https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Rekognition/TFaceDetail.html
               Attributes = new List<String>() { "ALL" }
           };
   
           try
           {
               DetectFacesResponse detectFacesResponse = rekognitionClient.DetectFaces(detectFacesRequest);
               bool hasAll = detectFacesRequest.Attributes.Contains("ALL");
               foreach(FaceDetail face in detectFacesResponse.FaceDetails)
               {
                   Console.WriteLine("BoundingBox: top={0} left={1} width={2} height={3}", face.BoundingBox.Left,
                       face.BoundingBox.Top, face.BoundingBox.Width, face.BoundingBox.Height);
                   Console.WriteLine("Confidence: {0}\nLandmarks: {1}\nPose: pitch={2} roll={3} yaw={4}\nQuality: {5}",
                       face.Confidence, face.Landmarks.Count, face.Pose.Pitch,
                       face.Pose.Roll, face.Pose.Yaw, face.Quality);
                   if (hasAll)
                       Console.WriteLine("The detected face is estimated to be between " +
                           face.AgeRange.Low + " and " + face.AgeRange.High + " years old.");
               }
           }
           catch (Exception e)
           {
               Console.WriteLine(e.Message);
           }
       }
   }
   ```

------
#### [ Ruby ]

   This example displays the estimated age range for detected faces, and lists various facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket where the image is stored.

   ```
      # Add to your Gemfile
      # gem 'aws-sdk-rekognition'
      require 'aws-sdk-rekognition'
      credentials = Aws::Credentials.new(
         ENV['AWS_ACCESS_KEY_ID'],
         ENV['AWS_SECRET_ACCESS_KEY']
      )
   bucket = 'amzn-s3-demo-bucket' # the bucket name, without s3://
   photo  = 'input.jpg' # the name of the file
      client   = Aws::Rekognition::Client.new credentials: credentials
      attrs = {
        image: {
          s3_object: {
            bucket: bucket,
            name: photo
          },
        },
        attributes: ['ALL']
      }
      response = client.detect_faces attrs
      puts "Detected faces for: #{photo}"
      response.face_details.each do |face_detail|
        low  = face_detail.age_range.low
        high = face_detail.age_range.high
        puts "The detected face is between: #{low} and #{high} years old"
        puts "All other attributes:"
        puts "  bounding_box.width:     #{face_detail.bounding_box.width}"
        puts "  bounding_box.height:    #{face_detail.bounding_box.height}"
        puts "  bounding_box.left:      #{face_detail.bounding_box.left}"
        puts "  bounding_box.top:       #{face_detail.bounding_box.top}"
        puts "  age.range.low:          #{face_detail.age_range.low}"
        puts "  age.range.high:         #{face_detail.age_range.high}"
        puts "  smile.value:            #{face_detail.smile.value}"
        puts "  smile.confidence:       #{face_detail.smile.confidence}"
        puts "  eyeglasses.value:       #{face_detail.eyeglasses.value}"
        puts "  eyeglasses.confidence:  #{face_detail.eyeglasses.confidence}"
        puts "  sunglasses.value:       #{face_detail.sunglasses.value}"
        puts "  sunglasses.confidence:  #{face_detail.sunglasses.confidence}"
        puts "  gender.value:           #{face_detail.gender.value}"
        puts "  gender.confidence:      #{face_detail.gender.confidence}"
        puts "  beard.value:            #{face_detail.beard.value}"
        puts "  beard.confidence:       #{face_detail.beard.confidence}"
        puts "  mustache.value:         #{face_detail.mustache.value}"
        puts "  mustache.confidence:    #{face_detail.mustache.confidence}"
        puts "  eyes_open.value:        #{face_detail.eyes_open.value}"
        puts "  eyes_open.confidence:   #{face_detail.eyes_open.confidence}"
     puts "  mouth_open.value:       #{face_detail.mouth_open.value}"
     puts "  mouth_open.confidence:  #{face_detail.mouth_open.confidence}"
        puts "  emotions[0].type:       #{face_detail.emotions[0].type}"
        puts "  emotions[0].confidence: #{face_detail.emotions[0].confidence}"
        puts "  landmarks[0].type:      #{face_detail.landmarks[0].type}"
        puts "  landmarks[0].x:         #{face_detail.landmarks[0].x}"
        puts "  landmarks[0].y:         #{face_detail.landmarks[0].y}"
        puts "  pose.roll:              #{face_detail.pose.roll}"
        puts "  pose.yaw:               #{face_detail.pose.yaw}"
        puts "  pose.pitch:             #{face_detail.pose.pitch}"
        puts "  quality.brightness:     #{face_detail.quality.brightness}"
        puts "  quality.sharpness:      #{face_detail.quality.sharpness}"
        puts "  confidence:             #{face_detail.confidence}"
        puts "------------"
        puts ""
      end
   ```

------
#### [ Node.js ]

   This example displays the estimated age range for detected faces, and lists various facial attributes. Change the value of `photo` to the image file name. Change the value of `amzn-s3-demo-bucket` to the Amazon S3 bucket where the image is stored.

   Replace the value of `profile-name` in the line that creates the AWS credentials with the name of your developer profile.

   If you are using TypeScript definitions, you may need to use `import AWS from 'aws-sdk'` instead of `const AWS = require('aws-sdk')` in order to run the program with Node.js. You can consult the [AWS SDK for JavaScript](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/) for more details. Depending on how you have your configurations set up, you also may need to specify your region with `AWS.config.update({region:region});`.

   ```
   
   
   // Load the SDK
   var AWS = require('aws-sdk');
   const bucket = 'amzn-s3-demo-bucket' // the bucket name, without s3://
   const photo  = 'photo-name' // the name of the file
   
   var credentials = new AWS.SharedIniFileCredentials({profile: 'profile-name'});
   AWS.config.credentials = credentials;
   AWS.config.update({region:'region-name'});
   
   const client = new AWS.Rekognition();
   const params = {
     Image: {
       S3Object: {
         Bucket: bucket,
         Name: photo
       },
     },
     Attributes: ['ALL']
   }
   
   client.detectFaces(params, function(err, response) {
       if (err) {
         console.log(err, err.stack); // an error occurred
       } else {
         console.log(`Detected faces for: ${photo}`)
         response.FaceDetails.forEach(data => {
           let low  = data.AgeRange.Low
           let high = data.AgeRange.High
           console.log(`The detected face is between: ${low} and ${high} years old`)
           console.log("All other attributes:")
           console.log(`  BoundingBox.Width:      ${data.BoundingBox.Width}`)
           console.log(`  BoundingBox.Height:     ${data.BoundingBox.Height}`)
           console.log(`  BoundingBox.Left:       ${data.BoundingBox.Left}`)
           console.log(`  BoundingBox.Top:        ${data.BoundingBox.Top}`)
           console.log(`  Age.Range.Low:          ${data.AgeRange.Low}`)
           console.log(`  Age.Range.High:         ${data.AgeRange.High}`)
           console.log(`  Smile.Value:            ${data.Smile.Value}`)
           console.log(`  Smile.Confidence:       ${data.Smile.Confidence}`)
           console.log(`  Eyeglasses.Value:       ${data.Eyeglasses.Value}`)
           console.log(`  Eyeglasses.Confidence:  ${data.Eyeglasses.Confidence}`)
           console.log(`  Sunglasses.Value:       ${data.Sunglasses.Value}`)
           console.log(`  Sunglasses.Confidence:  ${data.Sunglasses.Confidence}`)
           console.log(`  Gender.Value:           ${data.Gender.Value}`)
           console.log(`  Gender.Confidence:      ${data.Gender.Confidence}`)
           console.log(`  Beard.Value:            ${data.Beard.Value}`)
           console.log(`  Beard.Confidence:       ${data.Beard.Confidence}`)
           console.log(`  Mustache.Value:         ${data.Mustache.Value}`)
           console.log(`  Mustache.Confidence:    ${data.Mustache.Confidence}`)
           console.log(`  EyesOpen.Value:         ${data.EyesOpen.Value}`)
           console.log(`  EyesOpen.Confidence:    ${data.EyesOpen.Confidence}`)
           console.log(`  MouthOpen.Value:        ${data.MouthOpen.Value}`)
           console.log(`  MouthOpen.Confidence:   ${data.MouthOpen.Confidence}`)
           console.log(`  Emotions[0].Type:       ${data.Emotions[0].Type}`)
           console.log(`  Emotions[0].Confidence: ${data.Emotions[0].Confidence}`)
           console.log(`  Landmarks[0].Type:      ${data.Landmarks[0].Type}`)
           console.log(`  Landmarks[0].X:         ${data.Landmarks[0].X}`)
           console.log(`  Landmarks[0].Y:         ${data.Landmarks[0].Y}`)
           console.log(`  Pose.Roll:              ${data.Pose.Roll}`)
           console.log(`  Pose.Yaw:               ${data.Pose.Yaw}`)
           console.log(`  Pose.Pitch:             ${data.Pose.Pitch}`)
           console.log(`  Quality.Brightness:     ${data.Quality.Brightness}`)
           console.log(`  Quality.Sharpness:      ${data.Quality.Sharpness}`)
           console.log(`  Confidence:             ${data.Confidence}`)
           console.log("------------")
           console.log("")
         }) // for response.faceDetails
       } // if
     });
   ```

------

## DetectFaces operation request
<a name="detectfaces-request"></a>

The input to `DetectFaces` is an image. In this example, the image is loaded from an Amazon S3 bucket. The `Attributes` parameter specifies that all facial attributes should be returned. For more information, see [Working with images](images.md).

```
{
    "Image": {
        "S3Object": {
            "Bucket": "amzn-s3-demo-bucket",
            "Name": "input.jpg"
        }
    },
    "Attributes": [
        "ALL"
    ]
}
```

## DetectFaces operation response
<a name="detectfaces-response"></a>

 `DetectFaces` returns the following information for each detected face:


+ **Bounding box** – The coordinates of the bounding box that surrounds the face.
+ **Confidence** – The level of confidence that the bounding box contains a face. 
+ **Facial landmarks** – An array of facial landmarks. For each landmark (such as the left eye, right eye, and mouth), the response provides the x and y coordinates.
+ **Facial attributes** – A set of facial attributes, such as whether the face is occluded, returned as a `FaceDetail` object. The set includes: AgeRange, Beard, Emotions, EyeDirection, Eyeglasses, EyesOpen, FaceOccluded, Gender, MouthOpen, Mustache, Smile, and Sunglasses. For each such attribute, the response provides a value. The value can be of different types, such as a Boolean type (whether a person is wearing sunglasses), a string (whether the person is male or female), or an angular degree value (for pitch/yaw of eye gaze directions). In addition, for most attributes, the response also provides a confidence in the detected value for the attribute. Note that while FaceOccluded and EyeDirection attributes are supported when using `DetectFaces`, they aren't supported when analyzing videos with `StartFaceDetection` and `GetFaceDetection`.
+ **Quality** – Describes the brightness and the sharpness of the face. For information about ensuring the best possible face detection, see [Recommendations for facial comparison input images](recommendations-facial-input-images.md).
+ **Pose** – Describes the rotation of the face inside the image.
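
The `BoundingBox` and `Landmarks` coordinates are returned as ratios of the overall image width and height. The following sketch converts a bounding box to pixel values for a hypothetical 640 x 480 image; the box values mirror the example response in this section.

```python
# Convert a ratio-based BoundingBox to pixel coordinates for a
# hypothetical 640 x 480 image.
def to_pixels(box, image_width, image_height):
    return {
        "Left":   round(box["Left"] * image_width),
        "Top":    round(box["Top"] * image_height),
        "Width":  round(box["Width"] * image_width),
        "Height": round(box["Height"] * image_height),
    }

box = {"Width": 0.7919, "Height": 0.7510, "Left": 0.0888, "Top": 0.1510}
pixel_box = to_pixels(box, 640, 480)
```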

The request can specify the facial attributes you want returned. The `DEFAULT` subset of facial attributes - `BoundingBox`, `Confidence`, `Pose`, `Quality`, and `Landmarks` - is always returned. You can request specific facial attributes in addition to the default list by using `["DEFAULT", "FACE_OCCLUDED", "EYE_DIRECTION"]`, or just one attribute, such as `["FACE_OCCLUDED"]`. You can request all facial attributes by using `["ALL"]`. Requesting more attributes may increase response time. 
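
The attribute options above can be expressed as `detect_faces` request payloads. This sketch only builds the keyword arguments (using placeholder bucket and object names) rather than calling the service:

```python
# Build detect_faces keyword arguments for a given attribute list.
# The bucket and object names are placeholders.
image = {"S3Object": {"Bucket": "amzn-s3-demo-bucket", "Name": "input.jpg"}}

def detect_faces_kwargs(attributes):
    return {"Image": image, "Attributes": attributes}

default_only = detect_faces_kwargs(["DEFAULT"])
with_extras = detect_faces_kwargs(["DEFAULT", "FACE_OCCLUDED", "EYE_DIRECTION"])
everything = detect_faces_kwargs(["ALL"])  # most detail, slowest response
```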

The following is an example response of a `DetectFaces` API call: 

```
{
  "FaceDetails": [
    {
      "BoundingBox": {
        "Width": 0.7919622659683228,
        "Height": 0.7510867118835449,
        "Left": 0.08881539851427078,
        "Top": 0.151064932346344
      },
      "AgeRange": {
        "Low": 18,
        "High": 26
      },
      "Smile": {
        "Value": false,
        "Confidence": 89.77348327636719
      },
      "Eyeglasses": {
        "Value": true,
        "Confidence": 99.99996948242188
      },
      "Sunglasses": {
        "Value": true,
        "Confidence": 93.65237426757812
      },
      "Gender": {
        "Value": "Female",
        "Confidence": 99.85968780517578
      },
      "Beard": {
        "Value": false,
        "Confidence": 77.52591705322266
      },
      "Mustache": {
        "Value": false,
        "Confidence": 94.48904418945312
      },
      "EyesOpen": {
        "Value": true,
        "Confidence": 98.57169342041016
      },
      "MouthOpen": {
        "Value": false,
        "Confidence": 74.33953094482422
      },
      "Emotions": [
        {
          "Type": "SAD",
          "Confidence": 65.56403350830078
        },
        {
          "Type": "CONFUSED",
          "Confidence": 31.277774810791016
        },
        {
          "Type": "DISGUSTED",
          "Confidence": 15.553778648376465
        },
        {
          "Type": "ANGRY",
          "Confidence": 8.012762069702148
        },
        {
          "Type": "SURPRISED",
          "Confidence": 7.621500015258789
        },
        {
          "Type": "FEAR",
          "Confidence": 7.243380546569824
        },
        {
          "Type": "CALM",
          "Confidence": 5.8196024894714355
        },
        {
          "Type": "HAPPY",
          "Confidence": 2.2830512523651123
        }
      ],
      "Landmarks": [
        {
          "Type": "eyeLeft",
          "X": 0.30225440859794617,
          "Y": 0.41018882393836975
        },
        {
          "Type": "eyeRight",
          "X": 0.6439348459243774,
          "Y": 0.40341562032699585
        },
        {
          "Type": "mouthLeft",
          "X": 0.343580037355423,
          "Y": 0.6951127648353577
        },
        {
          "Type": "mouthRight",
          "X": 0.6306480765342712,
          "Y": 0.6898072361946106
        },
        {
          "Type": "nose",
          "X": 0.47164231538772583,
          "Y": 0.5763645172119141
        },
        {
          "Type": "leftEyeBrowLeft",
          "X": 0.1732882857322693,
          "Y": 0.34452149271965027
        },
        {
          "Type": "leftEyeBrowRight",
          "X": 0.3655243515968323,
          "Y": 0.33231860399246216
        },
        {
          "Type": "leftEyeBrowUp",
          "X": 0.2671719491481781,
          "Y": 0.31669262051582336
        },
        {
          "Type": "rightEyeBrowLeft",
          "X": 0.5613729953765869,
          "Y": 0.32813435792922974
        },
        {
          "Type": "rightEyeBrowRight",
          "X": 0.7665090560913086,
          "Y": 0.3318614959716797
        },
        {
          "Type": "rightEyeBrowUp",
          "X": 0.6612788438796997,
          "Y": 0.3082450032234192
        },
        {
          "Type": "leftEyeLeft",
          "X": 0.2416982799768448,
          "Y": 0.4085965156555176
        },
        {
          "Type": "leftEyeRight",
          "X": 0.36943578720092773,
          "Y": 0.41230902075767517
        },
        {
          "Type": "leftEyeUp",
          "X": 0.29974061250686646,
          "Y": 0.3971870541572571
        },
        {
          "Type": "leftEyeDown",
          "X": 0.30360740423202515,
          "Y": 0.42347756028175354
        },
        {
          "Type": "rightEyeLeft",
          "X": 0.5755768418312073,
          "Y": 0.4081145226955414
        },
        {
          "Type": "rightEyeRight",
          "X": 0.7050536870956421,
          "Y": 0.39924031496047974
        },
        {
          "Type": "rightEyeUp",
          "X": 0.642906129360199,
          "Y": 0.39026668667793274
        },
        {
          "Type": "rightEyeDown",
          "X": 0.6423097848892212,
          "Y": 0.41669243574142456
        },
        {
          "Type": "noseLeft",
          "X": 0.4122826159000397,
          "Y": 0.5987403392791748
        },
        {
          "Type": "noseRight",
          "X": 0.5394935011863708,
          "Y": 0.5960900187492371
        },
        {
          "Type": "mouthUp",
          "X": 0.478581964969635,
          "Y": 0.6660456657409668
        },
        {
          "Type": "mouthDown",
          "X": 0.483366996049881,
          "Y": 0.7497162818908691
        },
        {
          "Type": "leftPupil",
          "X": 0.30225440859794617,
          "Y": 0.41018882393836975
        },
        {
          "Type": "rightPupil",
          "X": 0.6439348459243774,
          "Y": 0.40341562032699585
        },
        {
          "Type": "upperJawlineLeft",
          "X": 0.11031254380941391,
          "Y": 0.3980775475502014
        },
        {
          "Type": "midJawlineLeft",
          "X": 0.19301874935626984,
          "Y": 0.7034031748771667
        },
        {
          "Type": "chinBottom",
          "X": 0.4939905107021332,
          "Y": 0.8877836465835571
        },
        {
          "Type": "midJawlineRight",
          "X": 0.7990140914916992,
          "Y": 0.6899225115776062
        },
        {
          "Type": "upperJawlineRight",
          "X": 0.8548634648323059,
          "Y": 0.38160091638565063
        }
      ],
      "Pose": {
        "Roll": -5.83309268951416,
        "Yaw": -2.4244730472564697,
        "Pitch": 2.6216139793395996
      },
      "Quality": {
        "Brightness": 96.16363525390625,
        "Sharpness": 95.51618957519531
      },
      "Confidence": 99.99872589111328,
      "FaceOccluded": {
        "Value": true,
        "Confidence": 99.99726104736328
      },
      "EyeDirection": {
        "Yaw": 16.299732,
        "Pitch": -6.407457,
        "Confidence": 99.968704
      }
    }
  ],
  "ResponseMetadata": {
    "RequestId": "8bf02607-70b7-4f20-be55-473fe1bba9a2",
    "HTTPStatusCode": 200,
    "HTTPHeaders": {
      "x-amzn-requestid": "8bf02607-70b7-4f20-be55-473fe1bba9a2",
      "content-type": "application/x-amz-json-1.1",
      "content-length": "3409",
      "date": "Wed, 26 Apr 2023 20:18:50 GMT"
    },
    "RetryAttempts": 0
  }
}
```

Note the following:
+ The `Pose` data describes the rotation of the face detected. You can use the combination of the `BoundingBox` and `Pose` data to draw the bounding box around faces that your application displays.
+ The `Quality` data describes the brightness and sharpness of the face. You might find this useful for comparing faces across images to find the best face.
+ The preceding response shows all of the facial landmarks that the service can detect, together with all facial attributes and emotions. To get all of these in the response, you must specify the `Attributes` parameter with the value `ALL`. By default, the `DetectFaces` API returns only the following five facial attributes: `BoundingBox`, `Confidence`, `Pose`, `Quality`, and `Landmarks`. The default landmarks returned are `eyeLeft`, `eyeRight`, `nose`, `mouthLeft`, and `mouthRight`. 

  

# Comparing faces in images
<a name="faces-comparefaces"></a>

With Rekognition you can compare faces between two images using the [CompareFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CompareFaces.html) operation. This feature is useful for applications like identity verification or photo matching.

CompareFaces compares a face in the *source* image with each face in the *target* image. Images are passed to CompareFaces as either: 
+ A base64-encoded representation of an image.
+ Amazon S3 objects. 

**Face detection versus face comparison**

Face comparison is different from face detection. Face detection (using the `DetectFaces` operation) identifies only the presence and location of faces in an image or video. In contrast, face comparison compares a face detected in a source image to the faces in a target image to find matches.

**Similarity thresholds**

Use the `SimilarityThreshold` parameter to define the minimum similarity score for matches to be included in the response. By default, only faces with a similarity score of 80% or higher are returned in the response.

**Note**  
`CompareFaces` uses machine learning algorithms, which are probabilistic. A false negative is an incorrect prediction that a face in the target image has a low similarity confidence score when compared to the face in the source image. To reduce the probability of false negatives, we recommend that you compare the target image against multiple source images. If you plan to use `CompareFaces` to make a decision that impacts an individual's rights, privacy, or access to services, we recommend that you pass the result to a human for review and further validation before taking action.

 

The following code examples demonstrate how to use the `CompareFaces` operation with various AWS SDKs. In the AWS CLI example, you upload two JPEG images to your Amazon S3 bucket and specify the object key names. In the other examples, you load two files from the local file system and input them as image byte arrays. 

**To compare faces**

1. If you haven't already:

   1. Create or update a user with `AmazonRekognitionFullAccess` and `AmazonS3ReadOnlyAccess` (AWS CLI example only) permissions. For more information, see [Step 1: Set up an AWS account and create a User](setting-up.md#setting-up-iam).

   1. Install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md).

1. Use the following example code to call the `CompareFaces` operation.

------
#### [ Java ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `sourceImage` and `targetImage` with the path and file name of the source and target images.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   package aws.example.rekognition.image;
   import com.amazonaws.services.rekognition.AmazonRekognition;
   import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
   import com.amazonaws.services.rekognition.model.Image;
   import com.amazonaws.services.rekognition.model.BoundingBox;
   import com.amazonaws.services.rekognition.model.CompareFacesMatch;
   import com.amazonaws.services.rekognition.model.CompareFacesRequest;
   import com.amazonaws.services.rekognition.model.CompareFacesResult;
   import com.amazonaws.services.rekognition.model.ComparedFace;
   import java.util.List;
   import java.io.File;
   import java.io.FileInputStream;
   import java.io.InputStream;
   import java.nio.ByteBuffer;
   import com.amazonaws.util.IOUtils;
   
   public class CompareFaces {
   
      public static void main(String[] args) throws Exception{
          Float similarityThreshold = 70F;
          String sourceImage = "source.jpg";
          String targetImage = "target.jpg";
          ByteBuffer sourceImageBytes=null;
          ByteBuffer targetImageBytes=null;
   
          AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
   
          //Load source and target images and create input parameters
          try (InputStream inputStream = new FileInputStream(new File(sourceImage))) {
             sourceImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
          }
          catch(Exception e)
          {
              System.out.println("Failed to load source image " + sourceImage);
              System.exit(1);
          }
          try (InputStream inputStream = new FileInputStream(new File(targetImage))) {
              targetImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
          }
          catch(Exception e)
          {
               System.out.println("Failed to load target image: " + targetImage);
              System.exit(1);
          }
   
          Image source=new Image()
               .withBytes(sourceImageBytes);
          Image target=new Image()
               .withBytes(targetImageBytes);
   
          CompareFacesRequest request = new CompareFacesRequest()
                  .withSourceImage(source)
                  .withTargetImage(target)
                  .withSimilarityThreshold(similarityThreshold);
   
          // Call operation
          CompareFacesResult compareFacesResult=rekognitionClient.compareFaces(request);
   
   
          // Display results
          List <CompareFacesMatch> faceDetails = compareFacesResult.getFaceMatches();
          for (CompareFacesMatch match: faceDetails){
            ComparedFace face= match.getFace();
            BoundingBox position = face.getBoundingBox();
            System.out.println("Face at " + position.getLeft().toString()
                  + " " + position.getTop()
                  + " matches with " + match.getSimilarity().toString()
                  + "% confidence.");
   
          }
          List<ComparedFace> uncompared = compareFacesResult.getUnmatchedFaces();
   
          System.out.println("There was " + uncompared.size()
               + " face(s) that did not match");
      }
   }
   ```

------
#### [ Java V2 ]

   This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/CompareFaces.java).

   ```
   import java.util.List;
   
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.Image;
   import software.amazon.awssdk.services.rekognition.model.BoundingBox;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse;
   import software.amazon.awssdk.services.rekognition.model.ComparedFace;
   import software.amazon.awssdk.core.SdkBytes;
   import java.io.FileInputStream;
   import java.io.FileNotFoundException;
   import java.io.InputStream;
   
   
   /**
    * Before running this Java V2 code example, set up your development environment, including your credentials.
    *
    * For more information, see the following documentation topic:
    *
    * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
    */
   public class CompareFaces {
   
       public static void main(String[] args) {
   
           final String usage = "\n" +
               "Usage: " +
               "   <pathSource> <pathTarget>\n\n" +
               "Where:\n" +
               "   pathSource - The path to the source image (for example, C:\\AWS\\pic1.png). \n " +
               "   pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png). \n\n";
   
           if (args.length != 2) {
               System.out.println(usage);
               System.exit(1);
           }
   
           Float similarityThreshold = 70F;
           String sourceImage = args[0];
           String targetImage = args[1];
           Region region = Region.US_EAST_1;
           RekognitionClient rekClient = RekognitionClient.builder()
               .region(region)
               .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
               .build();
   
           compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage);
           rekClient.close();
      }
   
       // snippet-start:[rekognition.java2.compare_faces.main]
       public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage, String targetImage) {
           try {
               InputStream sourceStream = new FileInputStream(sourceImage);
               InputStream tarStream = new FileInputStream(targetImage);
               SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);
               SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream);
   
               // Create an Image object for the source image.
               Image souImage = Image.builder()
                   .bytes(sourceBytes)
                   .build();
   
               Image tarImage = Image.builder()
                   .bytes(targetBytes)
                   .build();
   
               CompareFacesRequest facesRequest = CompareFacesRequest.builder()
                   .sourceImage(souImage)
                   .targetImage(tarImage)
                   .similarityThreshold(similarityThreshold)
                   .build();
   
               // Compare the two images.
               CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest);
               List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches();
               for (CompareFacesMatch match: faceDetails){
                   ComparedFace face= match.face();
                   BoundingBox position = face.boundingBox();
                   System.out.println("Face at " + position.left().toString()
                           + " " + position.top()
                            + " matches with " + match.similarity().toString()
                           + "% confidence.");
   
               }
               List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces();
               System.out.println("There was " + uncompared.size() + " face(s) that did not match");
               System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection());
               System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection());
   
           } catch(RekognitionException | FileNotFoundException e) {
               System.out.println("Failed to load source image " + sourceImage);
               System.exit(1);
           }
       }
       // snippet-end:[rekognition.java2.compare_faces.main]
   }
   ```

------
#### [ AWS CLI ]

   This example displays the JSON output from the `compare-faces` AWS CLI operation. 

   Replace `amzn-s3-demo-bucket` with the name of the Amazon S3 bucket that contains the source and target images. Replace `source.jpg` and `target.jpg` with the file names for the source and target images.

   ```
   aws rekognition compare-faces \
   --source-image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"source.jpg"}}' \
   --target-image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"target.jpg"}}' \
   --profile profile-name
   ```

   If you are accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with a backslash (that is, `\"`) to avoid parser errors. For an example, see the following: 

   ```
   aws rekognition compare-faces --target-image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"target.jpg\"}}" --source-image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"source.jpg\"}}" --profile profile-name
   ```

------
#### [ Python ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `source_file` and `target_file` with the path and file name of the source and target images. Replace the value of `profile_name` in the line that creates the Rekognition session with the name of your developer profile. 

   ```
   # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   # SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   import boto3
   
   def compare_faces(sourceFile, targetFile):
   
       session = boto3.Session(profile_name='profile-name')
       client = session.client('rekognition')
   
       imageSource = open(sourceFile, 'rb')
       imageTarget = open(targetFile, 'rb')
   
       response = client.compare_faces(SimilarityThreshold=80,
                                       SourceImage={'Bytes': imageSource.read()},
                                       TargetImage={'Bytes': imageTarget.read()})
   
       for faceMatch in response['FaceMatches']:
           position = faceMatch['Face']['BoundingBox']
           similarity = str(faceMatch['Similarity'])
           print('The face at ' +
                 str(position['Left']) + ' ' +
                 str(position['Top']) +
                 ' matches with ' + similarity + '% confidence')
   
       imageSource.close()
       imageTarget.close()
       return len(response['FaceMatches'])
   
   def main():
       source_file = 'source-file-name'
       target_file = 'target-file-name'
       face_matches = compare_faces(source_file, target_file)
       print("Face matches: " + str(face_matches))
   
   if __name__ == "__main__":
       main()
   ```

------
#### [ .NET ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `sourceImage` and `targetImage` with the path and file name of the source and target images.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   using System;
   using System.IO;
   using Amazon.Rekognition;
   using Amazon.Rekognition.Model;
   
   public class CompareFaces
   {
       public static void Example()
       {
           float similarityThreshold = 70F;
           String sourceImage = "source.jpg";
           String targetImage = "target.jpg";
   
           AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
   
           Amazon.Rekognition.Model.Image imageSource = new Amazon.Rekognition.Model.Image();
           try
           {
               using (FileStream fs = new FileStream(sourceImage, FileMode.Open, FileAccess.Read))
               {
                   byte[] data = new byte[fs.Length];
                   fs.Read(data, 0, (int)fs.Length);
                   imageSource.Bytes = new MemoryStream(data);
               }
           }
           catch (Exception)
           {
               Console.WriteLine("Failed to load source image: " + sourceImage);
               return;
           }
   
           Amazon.Rekognition.Model.Image imageTarget = new Amazon.Rekognition.Model.Image();
           try
           {
               using (FileStream fs = new FileStream(targetImage, FileMode.Open, FileAccess.Read))
               {
                    byte[] data = new byte[fs.Length];
                    fs.Read(data, 0, (int)fs.Length);
                   imageTarget.Bytes = new MemoryStream(data);
               }
           }
           catch (Exception)
           {
               Console.WriteLine("Failed to load target image: " + targetImage);
               return;
           }
   
           CompareFacesRequest compareFacesRequest = new CompareFacesRequest()
           {
               SourceImage = imageSource,
               TargetImage = imageTarget,
               SimilarityThreshold = similarityThreshold
           };
   
           // Call operation
           CompareFacesResponse compareFacesResponse = rekognitionClient.CompareFaces(compareFacesRequest);
   
           // Display results
           foreach(CompareFacesMatch match in compareFacesResponse.FaceMatches)
           {
               ComparedFace face = match.Face;
               BoundingBox position = face.BoundingBox;
               Console.WriteLine("Face at " + position.Left
                     + " " + position.Top
                     + " matches with " + match.Similarity
                     + "% confidence.");
           }
   
           Console.WriteLine("There was " + compareFacesResponse.UnmatchedFaces.Count + " face(s) that did not match");
   
       }
   }
   ```

------
#### [ Ruby ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `photo_source` and `photo_target` with the path and file name of the source and target images.

   ```
     # Add to your Gemfile
      # gem 'aws-sdk-rekognition'
      require 'aws-sdk-rekognition'
      credentials = Aws::Credentials.new(
         ENV['AWS_ACCESS_KEY_ID'],
         ENV['AWS_SECRET_ACCESS_KEY']
      )
      bucket        = 'bucket' # the bucketname without s3://
      photo_source  = 'source.jpg'
      photo_target  = 'target.jpg'
      client   = Aws::Rekognition::Client.new credentials: credentials
      attrs = {
        source_image: {
          s3_object: {
            bucket: bucket,
            name: photo_source
          },
        },
        target_image: {
          s3_object: {
            bucket: bucket,
            name: photo_target
          },
        },
        similarity_threshold: 70
      }
      response = client.compare_faces attrs
      response.face_matches.each do |face_match|
        position   = face_match.face.bounding_box
        similarity = face_match.similarity
        puts "The face at: #{position.left}, #{position.top} matches with #{similarity} % confidence"
      end
   ```

------
#### [ Node.js ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `photo_source` and `photo_target` with the path and file name of the source and target images. Replace the value of `profile_name` in the line that creates the Rekognition session with the name of your developer profile. 

   ```
   // Load the SDK
   var AWS = require('aws-sdk');
   const bucket = 'bucket-name' // the bucket name without s3://
   const photo_source  = 'photo-source-name' // path and the name of file
   const photo_target = 'photo-target-name'
   
   var credentials = new AWS.SharedIniFileCredentials({profile: 'profile-name'});
   AWS.config.credentials = credentials;
   AWS.config.update({region:'region-name'});
   
   const client = new AWS.Rekognition();
      const params = {
        SourceImage: {
          S3Object: {
            Bucket: bucket,
            Name: photo_source
          },
        },
        TargetImage: {
          S3Object: {
            Bucket: bucket,
            Name: photo_target
          },
        },
        SimilarityThreshold: 70
      }
      client.compareFaces(params, function(err, response) {
        if (err) {
          console.log(err, err.stack); // an error occurred
        } else {
          response.FaceMatches.forEach(data => {
            let position   = data.Face.BoundingBox
            let similarity = data.Similarity
            console.log(`The face at: ${position.Left}, ${position.Top} matches with ${similarity} % confidence`)
          }) // for response.faceDetails
        } // if
      });
   ```

------

## CompareFaces operation request
<a name="comparefaces-request"></a>

The input to `CompareFaces` is a pair of images: a source image and a target image. In this example, the source and target images are loaded from the local file system. The `SimilarityThreshold` input parameter specifies the minimum confidence that compared faces must match to be included in the response. For more information, see [Working with images](images.md).

```
{
    "SourceImage": {
        "Bytes": "/9j/4AAQSk2Q==..."
    },
    "TargetImage": {
        "Bytes": "/9j/4O1Q==..."
    },
    "SimilarityThreshold": 70
}
```

## CompareFaces operation response
<a name="comparefaces-response"></a>

The response includes:
+ An array of face matches: A list of matched faces, with a similarity score and metadata for each matching face. If multiple faces match, the `FaceMatches` array includes all of the face matches.
+ Face match details: Each matched face also provides a bounding box, a confidence value, landmark locations, and a similarity score.
+ A list of unmatched faces: Faces from the target image that didn't match the source image face. A bounding box is included for each unmatched face.
+ Source face information: Information about the face from the source image that was used for comparison, including the bounding box and confidence value. 

The example shows that one face match was found in the target image. For that face match, the response provides a bounding box and a confidence value (the level of confidence that Amazon Rekognition has that the bounding box contains a face). The similarity score of 100.0 indicates how similar the faces are. The example also shows one face that Amazon Rekognition found in the target image that didn't match the face analyzed in the source image.

```
{
    "FaceMatches": [{
        "Face": {
            "BoundingBox": {
                "Width": 0.5521978139877319,
                "Top": 0.1203877404332161,
                "Left": 0.23626373708248138,
                "Height": 0.3126954436302185
            },
            "Confidence": 99.98751068115234,
            "Pose": {
                "Yaw": -82.36799621582031,
                "Roll": -62.13221740722656,
                "Pitch": 0.8652129173278809
            },
            "Quality": {
                "Sharpness": 99.99880981445312,
                "Brightness": 54.49755096435547
            },
            "Landmarks": [{
                    "Y": 0.2996366024017334,
                    "X": 0.41685718297958374,
                    "Type": "eyeLeft"
                },
                {
                    "Y": 0.2658946216106415,
                    "X": 0.4414493441581726,
                    "Type": "eyeRight"
                },
                {
                    "Y": 0.3465650677680969,
                    "X": 0.48636093735694885,
                    "Type": "nose"
                },
                {
                    "Y": 0.30935320258140564,
                    "X": 0.6251809000968933,
                    "Type": "mouthLeft"
                },
                {
                    "Y": 0.26942989230155945,
                    "X": 0.6454493403434753,
                    "Type": "mouthRight"
                }
            ]
        },
        "Similarity": 100.0
    }],
    "SourceImageOrientationCorrection": "ROTATE_90",
    "TargetImageOrientationCorrection": "ROTATE_90",
    "UnmatchedFaces": [{
        "BoundingBox": {
            "Width": 0.4890109896659851,
            "Top": 0.6566604375839233,
            "Left": 0.10989011079072952,
            "Height": 0.278298944234848
        },
        "Confidence": 99.99992370605469,
        "Pose": {
            "Yaw": 51.51519012451172,
            "Roll": -110.32493591308594,
            "Pitch": -2.322134017944336
        },
        "Quality": {
            "Sharpness": 99.99671173095703,
            "Brightness": 57.23163986206055
        },
        "Landmarks": [{
                "Y": 0.8288310766220093,
                "X": 0.3133862614631653,
                "Type": "eyeLeft"
            },
            {
                "Y": 0.7632885575294495,
                "X": 0.28091415762901306,
                "Type": "eyeRight"
            },
            {
                "Y": 0.7417283654212952,
                "X": 0.3631140887737274,
                "Type": "nose"
            },
            {
                "Y": 0.8081989884376526,
                "X": 0.48565614223480225,
                "Type": "mouthLeft"
            },
            {
                "Y": 0.7548204660415649,
                "X": 0.46090251207351685,
                "Type": "mouthRight"
            }
        ]
    }],
    "SourceImageFace": {
        "BoundingBox": {
            "Width": 0.5521978139877319,
            "Top": 0.1203877404332161,
            "Left": 0.23626373708248138,
            "Height": 0.3126954436302185
        },
        "Confidence": 99.98751068115234
    }
}
```

# Detecting faces in a stored video
<a name="faces-sqs-video"></a>

Amazon Rekognition Video can detect faces in videos that are stored in an Amazon S3 bucket and provide information such as: 
+ The time or times faces are detected in a video.
+ The location of faces in the video frame at the time they were detected.
+ Facial landmarks such as the position of the left eye. 
+ Additional attributes as explained on the [Guidelines on face attributes](guidance-face-attributes.md) page.

Amazon Rekognition Video face detection in stored videos is an asynchronous operation. To start the detection of faces in videos, call [StartFaceDetection](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartFaceDetection.html). Amazon Rekognition Video publishes the completion status of the video analysis to an Amazon Simple Notification Service (Amazon SNS) topic. If the video analysis is successful, you can call [GetFaceDetection](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetFaceDetection.html) to get the results of the video analysis. For more information about starting video analysis and getting the results, see [Calling Amazon Rekognition Video operations](api-video.md). 
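If you want to try the asynchronous flow without setting up Amazon SNS and Amazon SQS, you can poll `GetFaceDetection` for the job status directly. The following sketch shows the start-then-poll sequence; the stub client is hypothetical so the example runs without AWS credentials, and in real use you would pass `boto3.client('rekognition')` instead, with your own bucket, video, role, and topic values.

```python
import time

def detect_faces_in_video(rek, bucket, video, role_arn, topic_arn,
                          poll_seconds=5):
    """Start an asynchronous face detection job and poll until it finishes.

    `rek` is any object with the start_face_detection/get_face_detection
    interface of a boto3 Rekognition client.
    """
    job = rek.start_face_detection(
        Video={'S3Object': {'Bucket': bucket, 'Name': video}},
        NotificationChannel={'RoleArn': role_arn, 'SNSTopicArn': topic_arn})
    job_id = job['JobId']

    while True:
        result = rek.get_face_detection(JobId=job_id)
        if result['JobStatus'] != 'IN_PROGRESS':
            # SUCCEEDED or FAILED; the caller inspects JobStatus.
            return result
        time.sleep(poll_seconds)

# Hypothetical stub client so this sketch runs without AWS credentials.
class _StubRekognition:
    def start_face_detection(self, **kwargs):
        return {'JobId': 'demo-job'}
    def get_face_detection(self, JobId):
        return {'JobStatus': 'SUCCEEDED', 'Faces': []}

result = detect_faces_in_video(_StubRekognition(), 'amzn-s3-demo-bucket',
                               'video.mp4', 'role-arn', 'topic-arn')
print(result['JobStatus'])
```

Polling is simpler to prototype with, but for production workloads the Amazon SNS notification approach used in the procedure below avoids repeated API calls while the job runs.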

This procedure expands on the code in [Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK)](video-analyzing-with-sqs.md), which uses an Amazon Simple Queue Service (Amazon SQS) queue to get the completion status of a video analysis request. 

**To detect faces in a video stored in an Amazon S3 bucket (SDK)**

1. Perform [Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK)](video-analyzing-with-sqs.md).

1. Add the following code to the class `VideoDetect` that you created in step 1.

------
#### [ AWS CLI ]
   + In the following code sample, change `amzn-s3-demo-bucket` and `Video-Name` to the Amazon S3 bucket name and file name that you specified in step 2.
   + Change `region-name` to the AWS Region that you're using. Replace the value of `profile-name` with the name of your developer profile.
   + Change `TopicARN` to the ARN of the Amazon SNS topic you created in step 3 of [Configuring Amazon Rekognition Video](api-video-roles.md).
   + Change `RoleARN` to the ARN of the IAM service role you created in step 7 of [Configuring Amazon Rekognition Video](api-video-roles.md).

   ```
   aws rekognition start-face-detection --video '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"Video-Name"}}' \
   --notification-channel '{"SNSTopicArn":"Topic-ARN","RoleArn":"Role-ARN"}' --region region-name --profile profile-name
   ```

   If you are accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with a backslash (that is, \") to address any parser errors you may encounter. For example:

   ```
   aws rekognition start-face-detection --video "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"Video-Name\"}}" --notification-channel "{\"SNSTopicArn\":\"Topic-ARN\",\"RoleArn\":\"Role-ARN\"}" --region region-name --profile profile-name
   ```

   After running the `StartFaceDetection` operation and getting the job ID number, run the following `GetFaceDetection` operation and provide the job ID number:

   ```
   aws rekognition get-face-detection --job-id job-id-number --profile profile-name
   ```
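Once `get-face-detection` returns JSON, a short script can summarize it. The following sketch uses a small inline sample that stands in for output you might save with `aws rekognition get-face-detection --job-id job-id-number > faces.json`; the field names match the `GetFaceDetection` response shape.

```python
import json

# Inline sample standing in for saved get-face-detection output.
sample = '''
{
  "JobStatus": "SUCCEEDED",
  "Faces": [
    {"Timestamp": 0, "Face": {"Confidence": 99.97}},
    {"Timestamp": 500, "Face": {"Confidence": 99.85}}
  ]
}
'''

response = json.loads(sample)
for detection in response['Faces']:
    seconds = detection['Timestamp'] / 1000  # Timestamp is in milliseconds
    confidence = detection['Face']['Confidence']
    print(f"{seconds:.1f}s: face with confidence {confidence:.2f}")
```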

------
#### [ Java ]

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   
   private static void StartFaceDetection(String bucket, String video) throws Exception{
            
       NotificationChannel channel= new NotificationChannel()
               .withSNSTopicArn(snsTopicArn)
               .withRoleArn(roleArn);
       
       StartFaceDetectionRequest req = new StartFaceDetectionRequest()
               .withVideo(new Video()
                       .withS3Object(new S3Object()
                           .withBucket(bucket)
                           .withName(video)))
               .withNotificationChannel(channel);
                           
                           
       
       StartFaceDetectionResult startFaceDetectionResult = rek.startFaceDetection(req);
       startJobId=startFaceDetectionResult.getJobId();
       
   } 
   
   private static void GetFaceDetectionResults() throws Exception{
       
       int maxResults=10;
       String paginationToken=null;
       GetFaceDetectionResult faceDetectionResult=null;
       
       do{
           if (faceDetectionResult !=null){
               paginationToken = faceDetectionResult.getNextToken();
           }
       
           faceDetectionResult = rek.getFaceDetection(new GetFaceDetectionRequest()
                .withJobId(startJobId)
                .withNextToken(paginationToken)
                .withMaxResults(maxResults));
       
           VideoMetadata videoMetaData=faceDetectionResult.getVideoMetadata();
               
           System.out.println("Format: " + videoMetaData.getFormat());
           System.out.println("Codec: " + videoMetaData.getCodec());
           System.out.println("Duration: " + videoMetaData.getDurationMillis());
           System.out.println("FrameRate: " + videoMetaData.getFrameRate());
               
               
           //Show faces, confidence and detection times
           List<FaceDetection> faces= faceDetectionResult.getFaces();
        
           for (FaceDetection face: faces) { 
               long seconds=face.getTimestamp()/1000;
               System.out.print("Sec: " + Long.toString(seconds) + " ");
               System.out.println(face.getFace().toString());
               System.out.println();           
           }
       } while (faceDetectionResult !=null && faceDetectionResult.getNextToken() != null);
         
           
   }
   ```

   In the function `main`, replace the lines: 

   ```
           StartLabelDetection(amzn-s3-demo-bucket, video);
   
           if (GetSQSMessageSuccess()==true)
           	GetLabelDetectionResults();
   ```

   with:

   ```
           StartFaceDetection(amzn-s3-demo-bucket, video);
   
           if (GetSQSMessageSuccess()==true)
           	GetFaceDetectionResults();
   ```

------
#### [ Java V2 ]

   This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/VideoDetectFaces.java).

   ```
   //snippet-start:[rekognition.java2.recognize_video_faces.import]
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.*;
   import java.util.List;
   //snippet-end:[rekognition.java2.recognize_video_faces.import]
   
   
   /**
   * Before running this Java V2 code example, set up your development environment, including your credentials.
   *
   * For more information, see the following documentation topic:
   *
   * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
   */
   public class VideoDetectFaces {
   
    private static String startJobId ="";
    public static void main(String[] args) {
   
        final String usage = "\n" +
            "Usage: " +
            "   <bucket> <video> <topicArn> <roleArn>\n\n" +
            "Where:\n" +
             "   bucket - The name of the bucket in which the video is located (for example, amzn-s3-demo-bucket). \n\n"+
            "   video - The name of video (for example, people.mp4). \n\n" +
            "   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic. \n\n" +
            "   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use. \n\n" ;
   
        if (args.length != 4) {
            System.out.println(usage);
            System.exit(1);
        }
   
        String bucket = args[0];
        String video = args[1];
        String topicArn = args[2];
        String roleArn = args[3];
   
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
            .region(region)
            .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
            .build();
   
        NotificationChannel channel = NotificationChannel.builder()
            .snsTopicArn(topicArn)
            .roleArn(roleArn)
            .build();
   
        StartFaceDetection(rekClient, channel, bucket, video);
        GetFaceResults(rekClient);
        System.out.println("This example is done!");
        rekClient.close();
    }
   
    // snippet-start:[rekognition.java2.recognize_video_faces.main]
    public static void StartFaceDetection(RekognitionClient rekClient,
                                          NotificationChannel channel,
                                          String bucket,
                                          String video) {
   
        try {
            S3Object s3Obj = S3Object.builder()
                .bucket(bucket)
                .name(video)
                .build();
   
            Video vidOb = Video.builder()
                .s3Object(s3Obj)
                .build();
   
            StartFaceDetectionRequest  faceDetectionRequest = StartFaceDetectionRequest.builder()
                .jobTag("Faces")
                .faceAttributes(FaceAttributes.ALL)
                .notificationChannel(channel)
                .video(vidOb)
                .build();
   
             StartFaceDetectionResponse startFaceDetectionResponse = rekClient.startFaceDetection(faceDetectionRequest);
             startJobId = startFaceDetectionResponse.jobId();
   
        } catch(RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
   
    public static void GetFaceResults(RekognitionClient rekClient) {
   
        try {
            String paginationToken=null;
            GetFaceDetectionResponse faceDetectionResponse=null;
            boolean finished = false;
            String status;
            int yy=0 ;
   
            do{
                if (faceDetectionResponse !=null)
                    paginationToken = faceDetectionResponse.nextToken();
   
                GetFaceDetectionRequest recognitionRequest = GetFaceDetectionRequest.builder()
                    .jobId(startJobId)
                    .nextToken(paginationToken)
                    .maxResults(10)
                    .build();
   
                // Wait until the job succeeds
                while (!finished) {
   
                    faceDetectionResponse = rekClient.getFaceDetection(recognitionRequest);
                    status = faceDetectionResponse.jobStatusAsString();
   
                    if (status.compareTo("SUCCEEDED") == 0)
                        finished = true;
                    else {
                        System.out.println(yy + " status is: " + status);
                        Thread.sleep(1000);
                    }
                    yy++;
                }
   
                finished = false;
   
                // Proceed when the job is done - otherwise VideoMetadata is null
                VideoMetadata videoMetaData=faceDetectionResponse.videoMetadata();
                System.out.println("Format: " + videoMetaData.format());
                System.out.println("Codec: " + videoMetaData.codec());
                System.out.println("Duration: " + videoMetaData.durationMillis());
                System.out.println("FrameRate: " + videoMetaData.frameRate());
                System.out.println("Job");
   
                // Show face information
                List<FaceDetection> faces= faceDetectionResponse.faces();
   
                for (FaceDetection face: faces) {
                    String age = face.face().ageRange().toString();
                    String smile = face.face().smile().toString();
                 System.out.println("The detected face is estimated to be "
                             + age + " years old.");
                 System.out.println("There is a smile : " + smile);
                }
   
            } while (faceDetectionResponse !=null && faceDetectionResponse.nextToken() != null);
   
        } catch(RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
    // snippet-end:[rekognition.java2.recognize_video_faces.main]
   }
   ```

------
#### [ Python ]

   ```
   #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   #SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
       # ============== Faces===============
       def StartFaceDetection(self):
           response=self.rek.start_face_detection(Video={'S3Object': {'Bucket': self.bucket, 'Name': self.video}},
               NotificationChannel={'RoleArn': self.roleArn, 'SNSTopicArn': self.snsTopicArn})
   
           self.startJobId=response['JobId']
           print('Start Job Id: ' + self.startJobId)
   
       def GetFaceDetectionResults(self):
           maxResults = 10
           paginationToken = ''
           finished = False
   
           while finished == False:
               response = self.rek.get_face_detection(JobId=self.startJobId,
                                               MaxResults=maxResults,
                                               NextToken=paginationToken)
   
               print('Codec: ' + response['VideoMetadata']['Codec'])
               print('Duration: ' + str(response['VideoMetadata']['DurationMillis']))
               print('Format: ' + response['VideoMetadata']['Format'])
               print('Frame rate: ' + str(response['VideoMetadata']['FrameRate']))
               print()
   
               for faceDetection in response['Faces']:
                   print('Face: ' + str(faceDetection['Face']))
                   print('Confidence: ' + str(faceDetection['Face']['Confidence']))
                   print('Timestamp: ' + str(faceDetection['Timestamp']))
                   print()
   
               if 'NextToken' in response:
                   paginationToken = response['NextToken']
               else:
                   finished = True
   ```

   In the function `main`, replace the lines: 

   ```
       analyzer.StartLabelDetection()
       if analyzer.GetSQSMessageSuccess()==True:
           analyzer.GetLabelDetectionResults()
   ```

   with:

   ```
       analyzer.StartFaceDetection()
       if analyzer.GetSQSMessageSuccess()==True:
           analyzer.GetFaceDetectionResults()
   ```

------
**Note**  
If you've already run a video example other than [Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK)](video-analyzing-with-sqs.md), the function name to replace is different.

1. Run the code. Information about the faces that were detected in the video is shown.

## GetFaceDetection operation response
<a name="getfacedetection-operation-response"></a>

`GetFaceDetection` returns an array (`Faces`) that contains information about the faces detected in the video. An array element, [FaceDetection](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_FaceDetection.html), exists for each time a face is detected in the video. The array elements returned are sorted by time, in milliseconds since the start of the video. 

The following example is a partial JSON response from `GetFaceDetection`. In the response, note the following:
+ **Bounding box** – The coordinates of the bounding box that surrounds the face.
+ **Confidence** – The level of confidence that the bounding box contains a face.
+ **Facial landmarks** – An array of facial landmarks. For each landmark (such as the left eye, right eye, and mouth), the response provides the `x` and `y` coordinates.
+ **Face attributes** – A set of facial attributes, which includes: AgeRange, Beard, Emotions, Eyeglasses, EyesOpen, Gender, MouthOpen, Mustache, Smile, and Sunglasses. The value can be of different types, such as a Boolean type (whether a person is wearing sunglasses) or a string (whether the person is male or female). In addition, for most attributes, the response also provides a confidence in the detected value for the attribute. Note that while FaceOccluded and EyeDirection attributes are supported when using `DetectFaces`, they aren’t supported when analyzing videos with `StartFaceDetection` and `GetFaceDetection`.
+ **Timestamp** – The time that the face was detected in the video. 
+ **Paging information** – The example shows one page of face detection information. You can specify how many face elements to return in the `MaxResults` input parameter for `GetFaceDetection`. If more results than `MaxResults` exist, `GetFaceDetection` returns a token (`NextToken`) that's used to get the next page of results. For more information, see [Getting Amazon Rekognition Video analysis results](api-video.md#api-video-get).
+ **Video information** – The response includes information about the video format (`VideoMetadata`) in each page of information that's returned by `GetFaceDetection`.
+ **Quality** – Describes the brightness and the sharpness of the face.
+ **Pose** – Describes the rotation of the face.

```
{
    "Faces": [
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.23000000417232513,
                    "Left": 0.42500001192092896,
                    "Top": 0.16333332657814026,
                    "Width": 0.12937499582767487
                },
                "Confidence": 99.97504425048828,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.46415066719055176,
                        "Y": 0.2572723925113678
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.5068183541297913,
                        "Y": 0.23705792427062988
                    },
                    {
                        "Type": "nose",
                        "X": 0.49765899777412415,
                        "Y": 0.28383663296699524
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.487221896648407,
                        "Y": 0.3452930748462677
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.5142884850502014,
                        "Y": 0.33167609572410583
                    }
                ],
                "Pose": {
                    "Pitch": 15.966927528381348,
                    "Roll": -15.547388076782227,
                    "Yaw": 11.34195613861084
                },
                "Quality": {
                    "Brightness": 44.80223083496094,
                    "Sharpness": 99.95819854736328
                }
            },
            "Timestamp": 0
        },
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.20000000298023224,
                    "Left": 0.029999999329447746,
                    "Top": 0.2199999988079071,
                    "Width": 0.11249999701976776
                },
                "Confidence": 99.85971069335938,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.06842322647571564,
                        "Y": 0.3010137975215912
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.10543643683195114,
                        "Y": 0.29697132110595703
                    },
                    {
                        "Type": "nose",
                        "X": 0.09569807350635529,
                        "Y": 0.33701086044311523
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.0732642263174057,
                        "Y": 0.3757539987564087
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.10589495301246643,
                        "Y": 0.3722417950630188
                    }
                ],
                "Pose": {
                    "Pitch": -0.5589138865470886,
                    "Roll": -5.1093974113464355,
                    "Yaw": 18.69594955444336
                },
                "Quality": {
                    "Brightness": 43.052337646484375,
                    "Sharpness": 99.68138885498047
                }
            },
            "Timestamp": 0
        },
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.2177777737379074,
                    "Left": 0.7593749761581421,
                    "Top": 0.13333334028720856,
                    "Width": 0.12250000238418579
                },
                "Confidence": 99.63436889648438,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.8005779385566711,
                        "Y": 0.20915353298187256
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.8391435146331787,
                        "Y": 0.21049551665782928
                    },
                    {
                        "Type": "nose",
                        "X": 0.8191410899162292,
                        "Y": 0.2523227035999298
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.8093273043632507,
                        "Y": 0.29053622484207153
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.8366993069648743,
                        "Y": 0.29101791977882385
                    }
                ],
                "Pose": {
                    "Pitch": 3.165884017944336,
                    "Roll": 1.4182015657424927,
                    "Yaw": -11.151537895202637
                },
                "Quality": {
                    "Brightness": 28.910892486572266,
                    "Sharpness": 97.61507415771484
                }
            },
            "Timestamp": 0
        },
        ...

    ],
    "JobStatus": "SUCCEEDED",
    "NextToken": "i7fj5XPV/fwviXqz0eag9Ow332Jd5G8ZGWf7hooirD/6V1qFmjKFOQZ6QPWUiqv29HbyuhMNqQ==",
    "VideoMetadata": {
        "Codec": "h264",
        "DurationMillis": 67301,
        "FileExtension": "mp4",
        "Format": "QuickTime / MOV",
        "FrameHeight": 1080,
        "FrameRate": 29.970029830932617,
        "FrameWidth": 1920
    }
}
```
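The paging behavior described above can be sketched as a small loop that follows `NextToken` until it's absent. The stub client below is hypothetical, serving two canned pages so the example runs without AWS credentials; in real use you would pass a boto3 Rekognition client and your job ID.

```python
def get_all_faces(rek, job_id, max_results=10):
    """Collect every FaceDetection element across all GetFaceDetection pages.

    `rek` is any object exposing get_face_detection with the boto3 signature.
    """
    faces, token = [], None
    while True:
        kwargs = {'JobId': job_id, 'MaxResults': max_results}
        if token:
            kwargs['NextToken'] = token
        page = rek.get_face_detection(**kwargs)
        faces.extend(page['Faces'])
        token = page.get('NextToken')
        if not token:  # no NextToken means this was the last page
            return faces

# Hypothetical two-page stub client so the sketch runs without credentials.
class _StubPages:
    def __init__(self):
        self._pages = [
            {'Faces': [{'Timestamp': 0}], 'NextToken': 'page2'},
            {'Faces': [{'Timestamp': 1000}]},
        ]
    def get_face_detection(self, **kwargs):
        return self._pages.pop(0)

faces = get_all_faces(_StubPages(), 'demo-job')
for f in faces:
    print(f"face at {f['Timestamp'] / 1000:.1f}s")
```

Because each element's `Timestamp` is milliseconds from the start of the video, dividing by 1000 gives the detection time in seconds.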