

本文属于机器翻译版本。若本译文内容与英语原文存在差异，则一律以英文原文为准。

# 检测和分析人脸
<a name="faces"></a>

Amazon Rekognition APIs 为您提供可用于检测和分析图像和视频中人脸的功能。本节概述了用于人脸分析的非存储操作。这些操作包括检测人脸标记、分析情绪和比较人脸等功能。

Amazon Rekognition 可以识别人脸标记（例如眼睛位置），检测情绪（例如开心或难过）和其他属性（例如是否戴眼镜、是否存在面部遮挡）。当检测到人脸时，系统会分析人脸属性并返回每个属性的置信度分数。

![\[一位面带微笑的女人戴着墨镜，在开阔的道路上开着一辆老式的黄色汽车。\]](http://docs.aws.amazon.com/zh_cn/rekognition/latest/dg/images/sample-detect-faces.png)


本节包含图像和视频操作的示例。

有关使用 Rekognition 的图像操作的更多信息，请参阅[使用图像](images.md)。

有关使用 Rekognition 的视频操作的更多信息，请参阅[使用存储视频分析操作](video.md)。

请注意，这些操作是非存储操作。您可以使用存储操作和人脸集合来保存图像中检测到的人脸的人脸元数据。稍后，您可以搜索图像和视频中的存储人脸。例如，这使您能够在视频中搜索特定人员。有关更多信息，请参阅 [在集合中搜索人脸](collections.md)。

有关更多信息，请参阅 [Amazon Rekognition FAQs](https://aws.amazon.com/rekognition/faqs/) 的 “面孔” 部分。

**注意**  
亚马逊 Rekognition Image 和亚马逊 Rekognition Video 使用的人脸检测模型不支持检测卡通/动画角色或非人类实体中的人脸。如果您想检测图像或视频中的卡通人物，建议您使用 Amazon Rekognition Custom Labels。有关更多信息，请参阅 [Amazon Rekognition Custom Labels 开发人员指南](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/what-is.html)。

**Topics**
+ [人脸检测和人脸比较概述](face-feature-differences.md)
+ [人脸属性指南](guidance-face-attributes.md)
+ [检测图像中的人脸](faces-detect-images.md)
+ [比较图像中的人脸](faces-comparefaces.md)
+ [检测存储视频中的人脸](faces-sqs-video.md)

# 人脸检测和人脸比较概述
<a name="face-feature-differences"></a>

Amazon Rekognition 让用户能够访问两个主要的机器学习应用程序来处理包含人脸的图像，分别是人脸检测和人脸比较。它们支持人脸分析和身份验证等关键功能，因此对于从安全到个人照片整理等各种应用都至关重要。

**人脸检测**

人脸检测系统可以解决这样一个问题：“这张图片中是否有人脸？” 人脸检测的关键方面包括：
+ **位置和方向**：确定图像或视频帧中人脸的存在、位置、比例和方向。
+ **人脸属性**：无论属性如何（性别、年龄或面部毛发等），都能检测人脸。
+ **其他信息**：提供有关面部遮挡和眼睛凝视方向的详细信息。

**人脸比较**

人脸比较系统重点解决这样一个问题：“一张图像中的人脸是否与另一张图像中的人脸匹配？” 人脸比较系统的功能包括：
+ **人脸匹配预测**：将图像中的人脸与所提供数据库中的人脸进行比较，以预测匹配结果。
+ **人脸属性处理**：处理属性以比较人脸，无论表情、面部毛发和年龄如何。

**置信度分数和漏检**

人脸检测和人脸比较系统均会利用置信度分数。置信度分数表示预测（例如存在人脸或人脸之间匹配）正确的可能性，分数越高，可能性就越大。例如，置信度为 90% 的预测比置信度为 60% 的预测更有可能是正确的。

如果人脸检测系统无法正确检测人脸，或者对实际存在的人脸给出低置信度预测，则为漏检（假阴性）。如果系统以高置信度错误地预测出并不存在的人脸，则为误报（假阳性）。

同样，人脸比较系统可能无法匹配属于同一个人的两张人脸（漏检/假阴性），也可能错误地预测来自不同人的两张人脸属于同一个人（误报/假阳性）。

**应用程序设计和阈值设置**
+ 您可以设置一个阈值来指定返回结果所需的最低置信度。选择适当的置信度阈值对于基于系统输出进行应用程序设计和决策至关重要。
+ 所选的置信度应反映您的使用案例。以下是一些使用案例和置信度阈值的示例：
  + **照片应用程序**：较低阈值（例如 80%）可能足以识别照片中的家庭成员。
  + **高风险场景**：在漏检或误报风险较高的使用案例（例如安全应用程序）中，系统应使用较高的置信度。在这种情况下，建议使用较高阈值（例如 99%），以实现精确的人脸匹配。
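上述按阈值筛选结果的思路可以用一小段 Python 来示意（这只是一个演示筛选逻辑的草图，其中的检测结果为假设数据；字段名 `Confidence` 与 Rekognition 响应中的置信度字段一致）：

```python
def filter_by_confidence(detections, min_confidence):
    """只保留置信度不低于阈值的检测结果。"""
    return [d for d in detections if d["Confidence"] >= min_confidence]

# 假设的检测结果（Confidence 为 0-100 的百分比，与 Rekognition 的返回格式一致）
detections = [
    {"Name": "face-1", "Confidence": 99.2},
    {"Name": "face-2", "Confidence": 85.0},
    {"Name": "face-3", "Confidence": 62.5},
]

print(len(filter_by_confidence(detections, 80)))  # 照片应用的较低阈值 → 2
print(len(filter_by_confidence(detections, 99)))  # 高风险场景的较高阈值 → 1
```

实际应用中，`detections` 将来自 `DetectFaces` 或 `CompareFaces` 等操作的响应。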

有关设置和了解置信度阈值的更多信息，请参阅[在集合中搜索人脸](collections.md)。

# 人脸属性指南
<a name="guidance-face-attributes"></a>

以下是有关 Amazon Rekognition 如何处理和返回人脸属性的详细信息。
+ **FaceDetail 对象**：对于每张检测到的人脸，都会返回一个 FaceDetail 对象，其中包含有关人脸标记、质量、姿势等的数据。
+ **属性预测**：系统将预测情绪、性别、年龄等属性。系统会为每个预测分配一个置信度，并返回预测以及相应的置信度分数。对于敏感使用案例，建议使用 99% 的置信度阈值。对于年龄估算，预测年龄范围的中点提供了最佳的近似值。
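例如，按上文的指南用预测年龄范围的中点做年龄估算，可以写成如下最小示例（其中的 `FaceDetail` 片段为假设数据，字段名与 `DetectFaces` 响应一致）：

```python
def age_estimate(age_range):
    """按上文指南，用年龄范围的中点作为最佳近似值。"""
    return (age_range["Low"] + age_range["High"]) / 2

# 假设的 FaceDetail 片段（字段名与 DetectFaces 响应一致）
face_detail = {"AgeRange": {"Low": 18, "High": 26}}
print(age_estimate(face_detail["AgeRange"]))  # 22.0
```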

请注意，性别和情绪预测基于外表，不应用于确定实际的性别认同或情绪状态。二元性别（男性/女性）预测基于特定图像中人脸的外观，并不表示一个人的性别认同，因此您不应使用 Rekognition 做出此类判断。我们不建议使用二元性别预测来做出会影响个人权利、隐私或服务访问权限的决策。同样，情绪预测并不表示一个人实际的内在情绪状态，因此您也不应使用 Rekognition 做出此类判断。例如，在照片中摆出笑脸的人可能看起来很开心，但实际上未必感到快乐。

**应用和使用案例**

以下是这些属性的一些实际应用和使用案例：
+ **应用**：微笑、姿势和锐度等属性可用于匿名选择个人资料照片或估算人口统计数据。
+ **常见使用案例**：社交媒体应用以及活动或零售店中的人口统计数据估算就是典型的例子。

有关每个属性的更多详细信息，请参阅[FaceDetail](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_FaceDetail.html)。

# 检测图像中的人脸
<a name="faces-detect-images"></a>

Amazon Rekognition Image 提供 [DetectFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectFaces.html) 操作，它通过查找关键面部特征（例如眼睛、鼻子和嘴巴）来检测输入图像中的人脸。Amazon Rekognition Image 可检测图像中最大的 100 张人脸。

您可以提供输入图像作为图像字节数组（base64 编码的图像字节）或指定 Amazon S3 对象。在此过程中，您将图像（JPEG 或 PNG）上传到您的 S3 存储桶并指定对象键名称。

**检测图像中的人脸**

1. 如果您尚未执行以下操作，请：

   1. 使用 `AmazonRekognitionFullAccess` 和 `AmazonS3ReadOnlyAccess` 权限创建或更新用户。有关更多信息，请参阅 [步骤 1：设置 AWS 账户并创建用户](setting-up.md#setting-up-iam)。

   1. 安装并配置 AWS CLI 和 AWS SDKs。有关更多信息，请参阅 [第 2 步：设置 AWS CLI 和 AWS SDKs](setup-awscli-sdk.md)。

1. 将包含一张或多张人脸的图像上传到您的 S3 存储桶。

   有关说明，请参阅《Amazon Simple Storage Service 用户指南》中的[将对象上传到 Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UploadingObjectsintoAmazonS3.html)。

1. 使用以下示例调用 `DetectFaces`。

------
#### [ Java ]

   此示例显示检测到的人脸的估计年龄范围，并列出所有检测到的人脸属性的 JSON。将 `photo` 的值更改为图像文件名。将`amzn-s3-demo-bucket`的值更改为存储图像的 Amazon S3 存储桶。

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   package aws.example.rekognition.image;
   
   import com.amazonaws.services.rekognition.AmazonRekognition;
   import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
   import com.amazonaws.services.rekognition.model.AmazonRekognitionException;
   import com.amazonaws.services.rekognition.model.Image;
   import com.amazonaws.services.rekognition.model.S3Object;
   import com.amazonaws.services.rekognition.model.AgeRange;
   import com.amazonaws.services.rekognition.model.Attribute;
   import com.amazonaws.services.rekognition.model.DetectFacesRequest;
   import com.amazonaws.services.rekognition.model.DetectFacesResult;
   import com.amazonaws.services.rekognition.model.FaceDetail;
   import com.fasterxml.jackson.databind.ObjectMapper;
   import java.util.List;
   
   
   public class DetectFaces {
      
      
      public static void main(String[] args) throws Exception {
   
         String photo = "input.jpg";
         String bucket = "amzn-s3-demo-bucket";
   
         AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
   
   
         DetectFacesRequest request = new DetectFacesRequest()
            .withImage(new Image()
               .withS3Object(new S3Object()
                  .withName(photo)
                  .withBucket(bucket)))
            .withAttributes(Attribute.ALL);
         // Replace Attribute.ALL with Attribute.DEFAULT to get default values.
   
         try {
            DetectFacesResult result = rekognitionClient.detectFaces(request);
            List < FaceDetail > faceDetails = result.getFaceDetails();
   
            for (FaceDetail face: faceDetails) {
               if (request.getAttributes().contains("ALL")) {
                  AgeRange ageRange = face.getAgeRange();
                  System.out.println("The detected face is estimated to be between "
                     + ageRange.getLow().toString() + " and " + ageRange.getHigh().toString()
                     + " years old.");
                  System.out.println("Here's the complete set of attributes:");
               } else { // non-default attributes have null values.
                  System.out.println("Here's the default set of attributes:");
               }
   
               ObjectMapper objectMapper = new ObjectMapper();
               System.out.println(objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(face));
            }
   
         } catch (AmazonRekognitionException e) {
            e.printStackTrace();
         }
   
      }
   
   }
   ```

------
#### [ Java V2 ]

   此代码取自 AWS 文档 SDK 示例 GitHub 存储库。请在[此处](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/DetectFaces.java)查看完整示例。

   ```
   import java.util.List;
   
   //snippet-start:[rekognition.java2.detect_labels.import]
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.S3Object;
   import software.amazon.awssdk.services.rekognition.model.DetectFacesRequest;
   import software.amazon.awssdk.services.rekognition.model.DetectFacesResponse;
   import software.amazon.awssdk.services.rekognition.model.Image;
   import software.amazon.awssdk.services.rekognition.model.Attribute;
   import software.amazon.awssdk.services.rekognition.model.FaceDetail;
   import software.amazon.awssdk.services.rekognition.model.AgeRange;
   
   //snippet-end:[rekognition.java2.detect_labels.import]
   
   public class DetectFaces {
   
       public static void main(String[] args) {
           final String usage = "\n" +
               "Usage: " +
               "   <bucket> <image>\n\n" +
               "Where:\n" +
                "   bucket - The name of the Amazon S3 bucket that contains the image (for example, amzn-s3-demo-bucket)." +
               "   image - The name of the image located in the Amazon S3 bucket (for example, Lake.png). \n\n";
   
           if (args.length != 2) {
               System.out.println(usage);
               System.exit(1);
           }
   
           String bucket = args[0];
           String image = args[1];
           Region region = Region.US_WEST_2;
           RekognitionClient rekClient = RekognitionClient.builder()
               .region(region)
               .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
               .build();
   
           getLabelsfromImage(rekClient, bucket, image);
           rekClient.close();
       }
   
       // snippet-start:[rekognition.java2.detect_labels_s3.main]
       public static void getLabelsfromImage(RekognitionClient rekClient, String bucket, String image) {
   
           try {
               S3Object s3Object = S3Object.builder()
                   .bucket(bucket)
                   .name(image)
                   .build() ;
   
               Image myImage = Image.builder()
                   .s3Object(s3Object)
                   .build();
   
               DetectFacesRequest facesRequest = DetectFacesRequest.builder()
                       .attributes(Attribute.ALL)
                       .image(myImage)
                       .build();
   
                   DetectFacesResponse facesResponse = rekClient.detectFaces(facesRequest);
                   List<FaceDetail> faceDetails = facesResponse.faceDetails();
                   for (FaceDetail face : faceDetails) {
                       AgeRange ageRange = face.ageRange();
                       System.out.println("The detected face is estimated to be between "
                                   + ageRange.low().toString() + " and " + ageRange.high().toString()
                                   + " years old.");
   
                       System.out.println("There is a smile : "+face.smile().value().toString());
                   }
   
           } catch (RekognitionException e) {
               System.out.println(e.getMessage());
               System.exit(1);
           }
       }
    // snippet-end:[rekognition.java2.detect_labels.main]
   }
   ```

------
#### [ AWS CLI ]

   此示例显示`detect-faces` AWS CLI 操作的 JSON 输出。将 `file` 替换为图像文件的名称。将`amzn-s3-demo-bucket`替换为包含图像文件的 Amazon S3 存储桶的名称。

   ```
   aws rekognition detect-faces --image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"image-name"}}' \
                                --attributes "ALL" --profile profile-name --region region-name
   ```

   如果您在 Windows 设备上使用 CLI，请使用双引号代替单引号，并用反斜杠（即 \"）对内部双引号进行转义，以解决可能遇到的解析器错误。示例如下：

   ```
   aws rekognition detect-faces --image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" --attributes "ALL" 
   --profile profile-name --region region-name
   ```

------
#### [ Python ]

   此示例显示检测到的人脸的估计年龄范围和其他属性，并列出所有检测到的人脸属性的 JSON。将 `photo` 的值更改为图像文件名。将`amzn-s3-demo-bucket`的值更改为存储图像的 Amazon S3 存储桶。将创建 Rekognition 会话的行中的`profile_name`值替换为您的开发人员资料的名称。

   ```
   import boto3
   import json
   
   def detect_faces(photo, bucket, region):
       
       session = boto3.Session(profile_name='profile-name',
                               region_name=region)
       client = session.client('rekognition', region_name=region)
   
       response = client.detect_faces(Image={'S3Object':{'Bucket':bucket,'Name':photo}},
                                      Attributes=['ALL'])
   
       print('Detected faces for ' + photo)
       for faceDetail in response['FaceDetails']:
           print('The detected face is between ' + str(faceDetail['AgeRange']['Low'])
                 + ' and ' + str(faceDetail['AgeRange']['High']) + ' years old')
   
           print('Here are the other attributes:')
           print(json.dumps(faceDetail, indent=4, sort_keys=True))
   
           # Access predictions for individual face details and print them
           print("Gender: " + str(faceDetail['Gender']))
           print("Smile: " + str(faceDetail['Smile']))
           print("Eyeglasses: " + str(faceDetail['Eyeglasses']))
           print("Face Occluded: " + str(faceDetail['FaceOccluded']))
           print("Emotions: " + str(faceDetail['Emotions'][0]))
   
       return len(response['FaceDetails'])
       
   def main():
       photo='photo'
       bucket='amzn-s3-demo-bucket'
       region='region'
       face_count=detect_faces(photo, bucket, region)
       print("Faces detected: " + str(face_count))
   
   if __name__ == "__main__":
       main()
   ```

------
#### [ .NET ]

   此示例显示检测到的人脸的估计年龄范围，并列出所有检测到的人脸属性的 JSON。将 `photo` 的值更改为图像文件名。将`amzn-s3-demo-bucket`的值更改为存储图像的 Amazon S3 存储桶。

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   using System;
   using System.Collections.Generic;
   using Amazon.Rekognition;
   using Amazon.Rekognition.Model;
   
   public class DetectFaces
   {
       public static void Example()
       {
           String photo = "input.jpg";
           String bucket = "amzn-s3-demo-bucket";
   
           AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
   
           DetectFacesRequest detectFacesRequest = new DetectFacesRequest()
           {
               Image = new Image()
               {
                   S3Object = new S3Object()
                   {
                       Name = photo,
                       Bucket = bucket
                   },
               },
               // Attributes can be "ALL" or "DEFAULT". 
               // "DEFAULT": BoundingBox, Confidence, Landmarks, Pose, and Quality.
               // "ALL": See https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Rekognition/TFaceDetail.html
               Attributes = new List<String>() { "ALL" }
           };
   
           try
           {
               DetectFacesResponse detectFacesResponse = rekognitionClient.DetectFaces(detectFacesRequest);
               bool hasAll = detectFacesRequest.Attributes.Contains("ALL");
               foreach(FaceDetail face in detectFacesResponse.FaceDetails)
               {
                   Console.WriteLine("BoundingBox: top={0} left={1} width={2} height={3}", face.BoundingBox.Left,
                       face.BoundingBox.Top, face.BoundingBox.Width, face.BoundingBox.Height);
                   Console.WriteLine("Confidence: {0}\nLandmarks: {1}\nPose: pitch={2} roll={3} yaw={4}\nQuality: {5}",
                       face.Confidence, face.Landmarks.Count, face.Pose.Pitch,
                       face.Pose.Roll, face.Pose.Yaw, face.Quality);
                   if (hasAll)
                       Console.WriteLine("The detected face is estimated to be between " +
                           face.AgeRange.Low + " and " + face.AgeRange.High + " years old.");
               }
           }
           catch (Exception e)
           {
               Console.WriteLine(e.Message);
           }
       }
   }
   ```

------
#### [ Ruby ]

   此示例显示检测到的人脸的估计年龄范围，并列出各种人脸属性。将 `photo` 的值更改为图像文件名。将`amzn-s3-demo-bucket`的值更改为存储图像的 Amazon S3 存储桶。

   ```
      # Add to your Gemfile
      # gem 'aws-sdk-rekognition'
      require 'aws-sdk-rekognition'
      credentials = Aws::Credentials.new(
         ENV['AWS_ACCESS_KEY_ID'],
         ENV['AWS_SECRET_ACCESS_KEY']
      )
   bucket = 'amzn-s3-demo-bucket' # the bucket name without s3://
   photo  = 'input.jpg' # the name of the file
      client   = Aws::Rekognition::Client.new credentials: credentials
      attrs = {
        image: {
          s3_object: {
            bucket: bucket,
            name: photo
          },
        },
        attributes: ['ALL']
      }
      response = client.detect_faces attrs
      puts "Detected faces for: #{photo}"
      response.face_details.each do |face_detail|
        low  = face_detail.age_range.low
        high = face_detail.age_range.high
        puts "The detected face is between: #{low} and #{high} years old"
        puts "All other attributes:"
        puts "  bounding_box.width:     #{face_detail.bounding_box.width}"
        puts "  bounding_box.height:    #{face_detail.bounding_box.height}"
        puts "  bounding_box.left:      #{face_detail.bounding_box.left}"
        puts "  bounding_box.top:       #{face_detail.bounding_box.top}"
        puts "  age.range.low:          #{face_detail.age_range.low}"
        puts "  age.range.high:         #{face_detail.age_range.high}"
        puts "  smile.value:            #{face_detail.smile.value}"
        puts "  smile.confidence:       #{face_detail.smile.confidence}"
        puts "  eyeglasses.value:       #{face_detail.eyeglasses.value}"
        puts "  eyeglasses.confidence:  #{face_detail.eyeglasses.confidence}"
        puts "  sunglasses.value:       #{face_detail.sunglasses.value}"
        puts "  sunglasses.confidence:  #{face_detail.sunglasses.confidence}"
        puts "  gender.value:           #{face_detail.gender.value}"
        puts "  gender.confidence:      #{face_detail.gender.confidence}"
        puts "  beard.value:            #{face_detail.beard.value}"
        puts "  beard.confidence:       #{face_detail.beard.confidence}"
        puts "  mustache.value:         #{face_detail.mustache.value}"
        puts "  mustache.confidence:    #{face_detail.mustache.confidence}"
        puts "  eyes_open.value:        #{face_detail.eyes_open.value}"
        puts "  eyes_open.confidence:   #{face_detail.eyes_open.confidence}"
     puts "  mouth_open.value:       #{face_detail.mouth_open.value}"
     puts "  mouth_open.confidence:  #{face_detail.mouth_open.confidence}"
        puts "  emotions[0].type:       #{face_detail.emotions[0].type}"
        puts "  emotions[0].confidence: #{face_detail.emotions[0].confidence}"
        puts "  landmarks[0].type:      #{face_detail.landmarks[0].type}"
        puts "  landmarks[0].x:         #{face_detail.landmarks[0].x}"
        puts "  landmarks[0].y:         #{face_detail.landmarks[0].y}"
        puts "  pose.roll:              #{face_detail.pose.roll}"
        puts "  pose.yaw:               #{face_detail.pose.yaw}"
        puts "  pose.pitch:             #{face_detail.pose.pitch}"
        puts "  quality.brightness:     #{face_detail.quality.brightness}"
        puts "  quality.sharpness:      #{face_detail.quality.sharpness}"
        puts "  confidence:             #{face_detail.confidence}"
        puts "------------"
        puts ""
      end
   ```

------
#### [ Node.js ]

   此示例显示检测到的人脸的估计年龄范围，并列出各种人脸属性。将 `photo` 的值更改为图像文件名。将`amzn-s3-demo-bucket`的值更改为存储图像的 Amazon S3 存储桶。

    将创建 Rekognition 会话的行中的`profile_name`值替换为您的开发人员资料的名称。

   如果您使用的是 TypeScript 定义，则可能需要使用`import AWS from 'aws-sdk'`而不是`const AWS = require('aws-sdk')`，以便使用 Node.js 运行该程序。您可以查阅[适用于 JavaScript 的AWS SDK](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/)，了解更多详情。根据您的配置设置方式，您可能还需要使用 `AWS.config.update({region:region});` 来指定您的区域。

   ```
   
   
   // Load the SDK
   var AWS = require('aws-sdk');
   const bucket = 'bucket-name' // the bucketname without s3://
   const photo  = 'photo-name' // the name of file
   
   var credentials = new AWS.SharedIniFileCredentials({profile: 'profile-name'});
   AWS.config.credentials = credentials;
   AWS.config.update({region:'region-name'});
   
   const client = new AWS.Rekognition();
   const params = {
     Image: {
       S3Object: {
         Bucket: bucket,
         Name: photo
       },
     },
     Attributes: ['ALL']
   }
   
   client.detectFaces(params, function(err, response) {
       if (err) {
         console.log(err, err.stack); // an error occurred
       } else {
         console.log(`Detected faces for: ${photo}`)
         response.FaceDetails.forEach(data => {
           let low  = data.AgeRange.Low
           let high = data.AgeRange.High
           console.log(`The detected face is between: ${low} and ${high} years old`)
           console.log("All other attributes:")
           console.log(`  BoundingBox.Width:      ${data.BoundingBox.Width}`)
           console.log(`  BoundingBox.Height:     ${data.BoundingBox.Height}`)
           console.log(`  BoundingBox.Left:       ${data.BoundingBox.Left}`)
           console.log(`  BoundingBox.Top:        ${data.BoundingBox.Top}`)
           console.log(`  Age.Range.Low:          ${data.AgeRange.Low}`)
           console.log(`  Age.Range.High:         ${data.AgeRange.High}`)
           console.log(`  Smile.Value:            ${data.Smile.Value}`)
           console.log(`  Smile.Confidence:       ${data.Smile.Confidence}`)
           console.log(`  Eyeglasses.Value:       ${data.Eyeglasses.Value}`)
           console.log(`  Eyeglasses.Confidence:  ${data.Eyeglasses.Confidence}`)
           console.log(`  Sunglasses.Value:       ${data.Sunglasses.Value}`)
           console.log(`  Sunglasses.Confidence:  ${data.Sunglasses.Confidence}`)
           console.log(`  Gender.Value:           ${data.Gender.Value}`)
           console.log(`  Gender.Confidence:      ${data.Gender.Confidence}`)
           console.log(`  Beard.Value:            ${data.Beard.Value}`)
           console.log(`  Beard.Confidence:       ${data.Beard.Confidence}`)
           console.log(`  Mustache.Value:         ${data.Mustache.Value}`)
           console.log(`  Mustache.Confidence:    ${data.Mustache.Confidence}`)
           console.log(`  EyesOpen.Value:         ${data.EyesOpen.Value}`)
           console.log(`  EyesOpen.Confidence:    ${data.EyesOpen.Confidence}`)
           console.log(`  MouthOpen.Value:        ${data.MouthOpen.Value}`)
           console.log(`  MouthOpen.Confidence:   ${data.MouthOpen.Confidence}`)
           console.log(`  Emotions[0].Type:       ${data.Emotions[0].Type}`)
           console.log(`  Emotions[0].Confidence: ${data.Emotions[0].Confidence}`)
           console.log(`  Landmarks[0].Type:      ${data.Landmarks[0].Type}`)
           console.log(`  Landmarks[0].X:         ${data.Landmarks[0].X}`)
           console.log(`  Landmarks[0].Y:         ${data.Landmarks[0].Y}`)
           console.log(`  Pose.Roll:              ${data.Pose.Roll}`)
           console.log(`  Pose.Yaw:               ${data.Pose.Yaw}`)
           console.log(`  Pose.Pitch:             ${data.Pose.Pitch}`)
           console.log(`  Quality.Brightness:     ${data.Quality.Brightness}`)
           console.log(`  Quality.Sharpness:      ${data.Quality.Sharpness}`)
           console.log(`  Confidence:             ${data.Confidence}`)
           console.log("------------")
           console.log("")
         }) // for response.faceDetails
       } // if
     });
   ```

------

## DetectFaces 操作请求
<a name="detectfaces-request"></a>

对 `DetectFaces` 的输入是一个图像。在此示例中，图像从 Amazon S3 存储桶加载。`Attributes` 参数指定应返回所有人脸属性。有关更多信息，请参阅 [使用图像](images.md)。

```
{
    "Image": {
        "S3Object": {
            "Bucket": "amzn-s3-demo-bucket",
            "Name": "input.jpg"
        }
    },
    "Attributes": [
        "ALL"
    ]
}
```

## DetectFaces 操作响应
<a name="detectfaces-response"></a>

 `DetectFaces` 将返回每个检测到的人脸的以下信息：


+ **边界框** – 人脸周围的边界框的坐标。
+ **置信度** - 边界框包含人脸的置信度级别。
+ **人脸标记** - 一组人脸标记。对于每个标记（例如，左眼、右眼和嘴），此响应将提供 x 坐标和 y 坐标。
+ **人脸属性** - 一组人脸属性（例如人脸是否被遮挡），作为 `FaceDetail` 对象返回。该集合包括：AgeRange、Beard、Emotions、EyeDirection、Eyeglasses、EyesOpen、FaceOccluded、Gender、MouthOpen、Mustache、Smile 和 Sunglasses。对于每个此类属性，响应都会提供一个值。该值可以是不同的类型，例如布尔类型（此人是否戴墨镜）、字符串（此人是男性还是女性）或角度值（例如视线方向的 pitch/yaw）。此外，对于大多数属性，响应还会提供该属性检测值的置信度。请注意，`DetectFaces` 支持 FaceOccluded 和 EyeDirection 属性，但使用 `StartFaceDetection` 和 `GetFaceDetection` 分析视频时不支持这些属性。
+ **质量** - 描述人脸的亮度和锐度。有关确保尽可能最佳的人脸检测的信息，请参阅[有关面部比较输入图像的建议](recommendations-facial-input-images.md)。
+ **姿势** - 描述图像中的人脸的旋转。

该请求可以指定您想要返回的人脸属性数组。系统将始终返回人脸属性的 `DEFAULT` 子集：`BoundingBox`、`Confidence`、`Pose`、`Quality` 和 `Landmarks`。您可以使用 `["DEFAULT", "FACE_OCCLUDED", "EYE_DIRECTION"]` 或只使用一个属性（例如 `["FACE_OCCLUDED"]`）来请求返回默认列表之外的特定人脸属性。您可以使用 `["ALL"]` 请求所有人脸属性。请求更多属性可能会增加响应时间。
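作为说明，下面是一个构造请求参数的最小 Python 草图（`build_detect_faces_request` 是为演示而假设的辅助函数；实际调用还需要 boto3 客户端，此处仅演示 `Attributes` 参数的写法）：

```python
def build_detect_faces_request(bucket, name, attributes):
    """构造 DetectFaces 请求参数字典（仅演示，实际需传给 boto3 客户端）。"""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": name}},
        "Attributes": attributes,
    }

# 在默认子集之外再请求特定属性：
req = build_detect_faces_request("amzn-s3-demo-bucket", "input.jpg",
                                 ["DEFAULT", "FACE_OCCLUDED", "EYE_DIRECTION"])
print(req["Attributes"])  # ['DEFAULT', 'FACE_OCCLUDED', 'EYE_DIRECTION']
```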

下面是 `DetectFaces` API 调用的示例响应：

```
{
  "FaceDetails": [
    {
      "BoundingBox": {
        "Width": 0.7919622659683228,
        "Height": 0.7510867118835449,
        "Left": 0.08881539851427078,
        "Top": 0.151064932346344
      },
      "AgeRange": {
        "Low": 18,
        "High": 26
      },
      "Smile": {
        "Value": false,
        "Confidence": 89.77348327636719
      },
      "Eyeglasses": {
        "Value": true,
        "Confidence": 99.99996948242188
      },
      "Sunglasses": {
        "Value": true,
        "Confidence": 93.65237426757812
      },
      "Gender": {
        "Value": "Female",
        "Confidence": 99.85968780517578
      },
      "Beard": {
        "Value": false,
        "Confidence": 77.52591705322266
      },
      "Mustache": {
        "Value": false,
        "Confidence": 94.48904418945312
      },
      "EyesOpen": {
        "Value": true,
        "Confidence": 98.57169342041016
      },
      "MouthOpen": {
        "Value": false,
        "Confidence": 74.33953094482422
      },
      "Emotions": [
        {
          "Type": "SAD",
          "Confidence": 65.56403350830078
        },
        {
          "Type": "CONFUSED",
          "Confidence": 31.277774810791016
        },
        {
          "Type": "DISGUSTED",
          "Confidence": 15.553778648376465
        },
        {
          "Type": "ANGRY",
          "Confidence": 8.012762069702148
        },
        {
          "Type": "SURPRISED",
          "Confidence": 7.621500015258789
        },
        {
          "Type": "FEAR",
          "Confidence": 7.243380546569824
        },
        {
          "Type": "CALM",
          "Confidence": 5.8196024894714355
        },
        {
          "Type": "HAPPY",
          "Confidence": 2.2830512523651123
        }
      ],
      "Landmarks": [
        {
          "Type": "eyeLeft",
          "X": 0.30225440859794617,
          "Y": 0.41018882393836975
        },
        {
          "Type": "eyeRight",
          "X": 0.6439348459243774,
          "Y": 0.40341562032699585
        },
        {
          "Type": "mouthLeft",
          "X": 0.343580037355423,
          "Y": 0.6951127648353577
        },
        {
          "Type": "mouthRight",
          "X": 0.6306480765342712,
          "Y": 0.6898072361946106
        },
        {
          "Type": "nose",
          "X": 0.47164231538772583,
          "Y": 0.5763645172119141
        },
        {
          "Type": "leftEyeBrowLeft",
          "X": 0.1732882857322693,
          "Y": 0.34452149271965027
        },
        {
          "Type": "leftEyeBrowRight",
          "X": 0.3655243515968323,
          "Y": 0.33231860399246216
        },
        {
          "Type": "leftEyeBrowUp",
          "X": 0.2671719491481781,
          "Y": 0.31669262051582336
        },
        {
          "Type": "rightEyeBrowLeft",
          "X": 0.5613729953765869,
          "Y": 0.32813435792922974
        },
        {
          "Type": "rightEyeBrowRight",
          "X": 0.7665090560913086,
          "Y": 0.3318614959716797
        },
        {
          "Type": "rightEyeBrowUp",
          "X": 0.6612788438796997,
          "Y": 0.3082450032234192
        },
        {
          "Type": "leftEyeLeft",
          "X": 0.2416982799768448,
          "Y": 0.4085965156555176
        },
        {
          "Type": "leftEyeRight",
          "X": 0.36943578720092773,
          "Y": 0.41230902075767517
        },
        {
          "Type": "leftEyeUp",
          "X": 0.29974061250686646,
          "Y": 0.3971870541572571
        },
        {
          "Type": "leftEyeDown",
          "X": 0.30360740423202515,
          "Y": 0.42347756028175354
        },
        {
          "Type": "rightEyeLeft",
          "X": 0.5755768418312073,
          "Y": 0.4081145226955414
        },
        {
          "Type": "rightEyeRight",
          "X": 0.7050536870956421,
          "Y": 0.39924031496047974
        },
        {
          "Type": "rightEyeUp",
          "X": 0.642906129360199,
          "Y": 0.39026668667793274
        },
        {
          "Type": "rightEyeDown",
          "X": 0.6423097848892212,
          "Y": 0.41669243574142456
        },
        {
          "Type": "noseLeft",
          "X": 0.4122826159000397,
          "Y": 0.5987403392791748
        },
        {
          "Type": "noseRight",
          "X": 0.5394935011863708,
          "Y": 0.5960900187492371
        },
        {
          "Type": "mouthUp",
          "X": 0.478581964969635,
          "Y": 0.6660456657409668
        },
        {
          "Type": "mouthDown",
          "X": 0.483366996049881,
          "Y": 0.7497162818908691
        },
        {
          "Type": "leftPupil",
          "X": 0.30225440859794617,
          "Y": 0.41018882393836975
        },
        {
          "Type": "rightPupil",
          "X": 0.6439348459243774,
          "Y": 0.40341562032699585
        },
        {
          "Type": "upperJawlineLeft",
          "X": 0.11031254380941391,
          "Y": 0.3980775475502014
        },
        {
          "Type": "midJawlineLeft",
          "X": 0.19301874935626984,
          "Y": 0.7034031748771667
        },
        {
          "Type": "chinBottom",
          "X": 0.4939905107021332,
          "Y": 0.8877836465835571
        },
        {
          "Type": "midJawlineRight",
          "X": 0.7990140914916992,
          "Y": 0.6899225115776062
        },
        {
          "Type": "upperJawlineRight",
          "X": 0.8548634648323059,
          "Y": 0.38160091638565063
        }
      ],
      "Pose": {
        "Roll": -5.83309268951416,
        "Yaw": -2.4244730472564697,
        "Pitch": 2.6216139793395996
      },
      "Quality": {
        "Brightness": 96.16363525390625,
        "Sharpness": 95.51618957519531
      },
      "Confidence": 99.99872589111328,
      "FaceOccluded": {
        "Value": true,
        "Confidence": 99.99726104736328
      },
      "EyeDirection": {
        "Yaw": 16.299732,
        "Pitch": -6.407457,
        "Confidence": 99.968704
      }
    }
  ],
  "ResponseMetadata": {
    "RequestId": "8bf02607-70b7-4f20-be55-473fe1bba9a2",
    "HTTPStatusCode": 200,
    "HTTPHeaders": {
      "x-amzn-requestid": "8bf02607-70b7-4f20-be55-473fe1bba9a2",
      "content-type": "application/x-amz-json-1.1",
      "content-length": "3409",
      "date": "Wed, 26 Apr 2023 20:18:50 GMT"
    },
    "RetryAttempts": 0
  }
}
```

注意以下几点：
+ `Pose` 数据描述了检测到的人脸的旋转。您可以使用 `BoundingBox` 和 `Pose` 数据的组合，在应用程序显示的人脸的四周绘制边界框。
+ `Quality` 描述人脸的亮度和锐度。您可能会发现这在比较图像之间的人脸和查找最佳人脸时很有用。
+ 前面的响应显示服务可检测的所有人脸 `Landmarks`、所有人脸属性和情绪。要在响应中获取所有这些项，您必须指定 `Attributes` 参数与值 `ALL`。默认情况下，`DetectFaces` API 仅返回以下 5 个人脸属性：`BoundingBox`、`Confidence`、`Pose`、`Quality` 和 `Landmarks`。返回的默认标记为：`eyeLeft`、`eyeRight`、`nose`、`mouthLeft` 和 `mouthRight`。
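例如，要按第一点所述在应用程序中绘制边界框，需要先把 `BoundingBox` 的比例值换算为像素坐标（Rekognition 以图像宽高的比例返回这些值）。下面是一个最小的 Python 草图（`bounding_box_to_pixels` 是为演示而假设的辅助函数，图像尺寸 640x480 为假设值）：

```python
def bounding_box_to_pixels(box, image_width, image_height):
    """把以图像尺寸比例表示的 BoundingBox 换算为像素坐标 (left, top, width, height)。"""
    left = int(box["Left"] * image_width)
    top = int(box["Top"] * image_height)
    width = int(box["Width"] * image_width)
    height = int(box["Height"] * image_height)
    return left, top, width, height

# 使用上面示例响应中的边界框，假设图像为 640x480 像素：
box = {"Width": 0.7919622659683228, "Height": 0.7510867118835449,
       "Left": 0.08881539851427078, "Top": 0.151064932346344}
print(bounding_box_to_pixels(box, 640, 480))  # (56, 72, 506, 360)
```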

  

# 比较图像中的人脸
<a name="faces-comparefaces"></a>

使用 Rekognition 的 [CompareFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CompareFaces.html) 操作，您可以比较两张图像中的人脸。此功能对于身份验证或照片匹配等应用非常有用。

CompareFaces 将*源*图像中的人脸与*目标*图像中的每张人脸进行比较。图像可通过以下任一方式传递给 CompareFaces：
+ 图像的 Base64 编码表示。
+ Amazon S3 对象。

**人脸检测与人脸比较**

人脸比较不同于人脸检测。人脸检测（使用 DetectFaces）仅识别图像或视频中人脸的存在和位置。相比之下，人脸比较涉及将源图像中检测到的人脸与目标图像中的人脸进行比较以找到匹配项。

**相似度阈值**

使用 `similarityThreshold` 参数定义响应中要包含的匹配项的最低置信度。默认情况下，响应中只返回相似度得分大于或等于 80% 的人脸。
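
阈值的作用可以用如下 Python 片段来示意（本地模拟，并非服务端实现；`face_matches` 为假设的示例数据，真实应用中它来自 `CompareFaces` 响应的 `FaceMatches` 字段）：

```python
# 相似度阈值过滤的本地示意：只保留 Similarity 不低于阈值的匹配项。
# face_matches 为假设的示例数据。
face_matches = [
    {"Similarity": 99.2, "Face": {"BoundingBox": {"Left": 0.24, "Top": 0.12}}},
    {"Similarity": 83.5, "Face": {"BoundingBox": {"Left": 0.51, "Top": 0.40}}},
    {"Similarity": 62.0, "Face": {"BoundingBox": {"Left": 0.70, "Top": 0.55}}},
]

def filter_matches(matches, similarity_threshold=80.0):
    """仅保留相似度得分不低于阈值的匹配项。"""
    return [m for m in matches if m["Similarity"] >= similarity_threshold]

kept = filter_matches(face_matches)  # 使用默认的 80% 阈值
print(f"{len(kept)} of {len(face_matches)} matches kept")
```

降低 `SimilarityThreshold` 会返回更多（但置信度更低的）匹配项；提高阈值则相反。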

**注意**  
`CompareFaces` 使用机器学习算法，这些算法是概率性的。假阴性是一种错误的预测，即与源图像中的人脸相比，目标图像中的人脸具有较低的相似性置信度得分。为了降低假阴性的可能性，我们建议您将目标图像与多个源图像进行比较。如果您计划使用 `CompareFaces` 来做出影响个人权利、隐私或服务访问权限的决定，我们建议您在采取行动之前将结果交给人类进行审查和进一步验证。

 

以下代码示例演示了如何通过各种 AWS SDKs 调用 CompareFaces 操作。在 AWS CLI 示例中，您将两张 JPEG 图像上传到 Amazon S3 存储桶并指定对象键名称。在其他示例中，您从本地文件系统加载两个文件，并将它们作为图像字节数组传入。

**比较人脸**

1. 如果您尚未执行以下操作，请：

   1. 创建或更新具有 `AmazonRekognitionFullAccess` 和 `AmazonS3ReadOnlyAccess`（仅 AWS CLI 示例需要）权限的用户。有关更多信息，请参阅 [步骤 1：设置 AWS 账户并创建用户](setting-up.md#setting-up-iam)。

   1. 安装并配置 AWS CLI 和 AWS SDKs。有关更多信息，请参阅 [第 2 步：设置 AWS CLI 和 AWS SDKs](setup-awscli-sdk.md)。

1. 使用以下示例代码调用 `CompareFaces` 操作。

------
#### [ Java ]

   此示例显示有关与从本地文件系统加载的源和目标图像中的人脸进行匹配的信息。

   将 `sourceImage` 和 `targetImage` 的值分别替换为源和目标图像的路径和文件名。

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   package aws.example.rekognition.image;
   import com.amazonaws.services.rekognition.AmazonRekognition;
   import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
   import com.amazonaws.services.rekognition.model.Image;
   import com.amazonaws.services.rekognition.model.BoundingBox;
   import com.amazonaws.services.rekognition.model.CompareFacesMatch;
   import com.amazonaws.services.rekognition.model.CompareFacesRequest;
   import com.amazonaws.services.rekognition.model.CompareFacesResult;
   import com.amazonaws.services.rekognition.model.ComparedFace;
   import java.util.List;
   import java.io.File;
   import java.io.FileInputStream;
   import java.io.InputStream;
   import java.nio.ByteBuffer;
   import com.amazonaws.util.IOUtils;
   
   public class CompareFaces {
   
      public static void main(String[] args) throws Exception{
          Float similarityThreshold = 70F;
          String sourceImage = "source.jpg";
          String targetImage = "target.jpg";
          ByteBuffer sourceImageBytes=null;
          ByteBuffer targetImageBytes=null;
   
          AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
   
          //Load source and target images and create input parameters
          try (InputStream inputStream = new FileInputStream(new File(sourceImage))) {
             sourceImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
          }
          catch(Exception e)
          {
              System.out.println("Failed to load source image " + sourceImage);
              System.exit(1);
          }
          try (InputStream inputStream = new FileInputStream(new File(targetImage))) {
              targetImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
          }
          catch(Exception e)
          {
               System.out.println("Failed to load target image: " + targetImage);
              System.exit(1);
          }
   
          Image source=new Image()
               .withBytes(sourceImageBytes);
          Image target=new Image()
               .withBytes(targetImageBytes);
   
          CompareFacesRequest request = new CompareFacesRequest()
                  .withSourceImage(source)
                  .withTargetImage(target)
                  .withSimilarityThreshold(similarityThreshold);
   
          // Call operation
          CompareFacesResult compareFacesResult=rekognitionClient.compareFaces(request);
   
   
          // Display results
          List <CompareFacesMatch> faceDetails = compareFacesResult.getFaceMatches();
          for (CompareFacesMatch match: faceDetails){
            ComparedFace face= match.getFace();
            BoundingBox position = face.getBoundingBox();
            System.out.println("Face at " + position.getLeft().toString()
                  + " " + position.getTop()
                  + " matches with " + match.getSimilarity().toString()
                  + "% confidence.");
   
          }
          List<ComparedFace> uncompared = compareFacesResult.getUnmatchedFaces();
   
          System.out.println("There was " + uncompared.size()
               + " face(s) that did not match");
      }
   }
   ```

------
#### [ Java V2 ]

   此代码取自 AWS 文档 SDK 示例 GitHub 存储库。请在[此处](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/CompareFaces.java)查看完整示例。

   ```
   import java.util.List;
   
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.Image;
   import software.amazon.awssdk.services.rekognition.model.BoundingBox;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse;
   import software.amazon.awssdk.services.rekognition.model.ComparedFace;
   import software.amazon.awssdk.core.SdkBytes;
   import java.io.FileInputStream;
   import java.io.FileNotFoundException;
   import java.io.InputStream;
   
   // snippet-end:[rekognition.java2.detect_faces.import]
   
   /**
    * Before running this Java V2 code example, set up your development environment, including your credentials.
    *
    * For more information, see the following documentation topic:
    *
    * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
    */
   public class CompareFaces {
   
       public static void main(String[] args) {
   
           final String usage = "\n" +
               "Usage: " +
               "   <pathSource> <pathTarget>\n\n" +
               "Where:\n" +
               "   pathSource - The path to the source image (for example, C:\\AWS\\pic1.png). \n " +
               "   pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png). \n\n";
   
           if (args.length != 2) {
               System.out.println(usage);
               System.exit(1);
           }
   
           Float similarityThreshold = 70F;
           String sourceImage = args[0];
           String targetImage = args[1];
           Region region = Region.US_EAST_1;
           RekognitionClient rekClient = RekognitionClient.builder()
               .region(region)
               .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
               .build();
   
           compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage);
           rekClient.close();
      }
   
       // snippet-start:[rekognition.java2.compare_faces.main]
       public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage, String targetImage) {
           try {
               InputStream sourceStream = new FileInputStream(sourceImage);
               InputStream tarStream = new FileInputStream(targetImage);
               SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);
               SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream);
   
               // Create an Image object for the source image.
               Image souImage = Image.builder()
                   .bytes(sourceBytes)
                   .build();
   
               Image tarImage = Image.builder()
                   .bytes(targetBytes)
                   .build();
   
               CompareFacesRequest facesRequest = CompareFacesRequest.builder()
                   .sourceImage(souImage)
                   .targetImage(tarImage)
                   .similarityThreshold(similarityThreshold)
                   .build();
   
               // Compare the two images.
               CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest);
               List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches();
               for (CompareFacesMatch match: faceDetails){
                   ComparedFace face= match.face();
                   BoundingBox position = face.boundingBox();
                System.out.println("Face at " + position.left().toString()
                        + " " + position.top()
                        + " matches with " + match.similarity().toString()
                        + "% confidence.");
   
               }
               List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces();
               System.out.println("There was " + uncompared.size() + " face(s) that did not match");
               System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection());
               System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection());
   
           } catch(RekognitionException | FileNotFoundException e) {
               System.out.println("Failed to load source image " + sourceImage);
               System.exit(1);
           }
       }
       // snippet-end:[rekognition.java2.compare_faces.main]
   }
   ```

------
#### [ AWS CLI ]

   此示例显示 `compare-faces` AWS CLI 操作的 JSON 输出。

   将 `amzn-s3-demo-bucket` 替换为包含源和目标图像的 Amazon S3 存储桶的名称。将 `image-name` 替换为源和目标图像的对象键名称。

   ```
    aws rekognition compare-faces \
      --source-image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"image-name"}}' \
      --target-image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"image-name"}}' \
      --profile profile-name
   ```

    如果您在 Windows 设备上访问 CLI，请使用双引号代替单引号，并用反斜杠（即 \"）对内部双引号进行转义，以解决可能遇到的任何解析器错误。例如，请参阅以下内容：

   ```
    aws rekognition compare-faces --target-image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" --source-image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" --profile profile-name
   ```

------
#### [ Python ]

   此示例显示有关与从本地文件系统加载的源和目标图像中的人脸进行匹配的信息。

   将 `source_file` 和 `target_file` 的值分别替换为源和目标图像的路径和文件名。将创建 Rekognition 会话的行中的`profile_name`值替换为您的开发人员资料的名称。

   ```
   # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   # PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   import boto3
   
   def compare_faces(sourceFile, targetFile):
   
       session = boto3.Session(profile_name='profile-name')
       client = session.client('rekognition')
   
       imageSource = open(sourceFile, 'rb')
       imageTarget = open(targetFile, 'rb')
   
       response = client.compare_faces(SimilarityThreshold=80,
                                       SourceImage={'Bytes': imageSource.read()},
                                       TargetImage={'Bytes': imageTarget.read()})
   
       for faceMatch in response['FaceMatches']:
           position = faceMatch['Face']['BoundingBox']
           similarity = str(faceMatch['Similarity'])
           print('The face at ' +
                 str(position['Left']) + ' ' +
                 str(position['Top']) +
                 ' matches with ' + similarity + '% confidence')
   
       imageSource.close()
       imageTarget.close()
       return len(response['FaceMatches'])
   
   def main():
       source_file = 'source-file-name'
       target_file = 'target-file-name'
       face_matches = compare_faces(source_file, target_file)
       print("Face matches: " + str(face_matches))
   
   if __name__ == "__main__":
       main()
   ```

------
#### [ .NET ]

   此示例显示有关与从本地文件系统加载的源和目标图像中的人脸进行匹配的信息。

   将 `sourceImage` 和 `targetImage` 的值分别替换为源和目标图像的路径和文件名。

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   using System;
   using System.IO;
   using Amazon.Rekognition;
   using Amazon.Rekognition.Model;
   
   public class CompareFaces
   {
       public static void Example()
       {
           float similarityThreshold = 70F;
           String sourceImage = "source.jpg";
           String targetImage = "target.jpg";
   
           AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
   
           Amazon.Rekognition.Model.Image imageSource = new Amazon.Rekognition.Model.Image();
           try
           {
               using (FileStream fs = new FileStream(sourceImage, FileMode.Open, FileAccess.Read))
               {
                   byte[] data = new byte[fs.Length];
                   fs.Read(data, 0, (int)fs.Length);
                   imageSource.Bytes = new MemoryStream(data);
               }
           }
           catch (Exception)
           {
               Console.WriteLine("Failed to load source image: " + sourceImage);
               return;
           }
   
           Amazon.Rekognition.Model.Image imageTarget = new Amazon.Rekognition.Model.Image();
           try
           {
               using (FileStream fs = new FileStream(targetImage, FileMode.Open, FileAccess.Read))
               {
                   byte[] data = new byte[fs.Length];
                   fs.Read(data, 0, (int)fs.Length);
                   imageTarget.Bytes = new MemoryStream(data);
               }
           }
           catch (Exception)
           {
               Console.WriteLine("Failed to load target image: " + targetImage);
               return;
           }
   
           CompareFacesRequest compareFacesRequest = new CompareFacesRequest()
           {
               SourceImage = imageSource,
               TargetImage = imageTarget,
               SimilarityThreshold = similarityThreshold
           };
   
           // Call operation
           CompareFacesResponse compareFacesResponse = rekognitionClient.CompareFaces(compareFacesRequest);
   
           // Display results
           foreach(CompareFacesMatch match in compareFacesResponse.FaceMatches)
           {
               ComparedFace face = match.Face;
               BoundingBox position = face.BoundingBox;
               Console.WriteLine("Face at " + position.Left
                     + " " + position.Top
                     + " matches with " + match.Similarity
                     + "% confidence.");
           }
   
           Console.WriteLine("There was " + compareFacesResponse.UnmatchedFaces.Count + " face(s) that did not match");
   
       }
   }
   ```

------
#### [ Ruby ]

   此示例显示有关源和目标图像中人脸匹配的信息，两张图像均从 Amazon S3 存储桶加载。

   将 `bucket` 的值替换为存放图像的 Amazon S3 存储桶名称（不带 s3:// 前缀），并将 `photo_source` 和 `photo_target` 的值分别替换为源和目标图像的对象键名称。

   ```
     # Add to your Gemfile
      # gem 'aws-sdk-rekognition'
      require 'aws-sdk-rekognition'
      credentials = Aws::Credentials.new(
         ENV['AWS_ACCESS_KEY_ID'],
         ENV['AWS_SECRET_ACCESS_KEY']
      )
      bucket        = 'bucket' # the bucketname without s3://
      photo_source  = 'source.jpg'
      photo_target  = 'target.jpg'
      client   = Aws::Rekognition::Client.new credentials: credentials
      attrs = {
        source_image: {
          s3_object: {
            bucket: bucket,
            name: photo_source
          },
        },
        target_image: {
          s3_object: {
            bucket: bucket,
            name: photo_target
          },
        },
        similarity_threshold: 70
      }
      response = client.compare_faces attrs
      response.face_matches.each do |face_match|
        position   = face_match.face.bounding_box
        similarity = face_match.similarity
        puts "The face at: #{position.left}, #{position.top} matches with #{similarity} % confidence"
      end
   ```

------
#### [ Node.js ]

   此示例显示有关源和目标图像中人脸匹配的信息，两张图像均从 Amazon S3 存储桶加载。

   将 `bucket-name` 替换为存放图像的 Amazon S3 存储桶名称，将 `photo_source` 和 `photo_target` 的值分别替换为源和目标图像的对象键名称。将创建凭证的行中的 `profile-name` 替换为您的开发人员配置文件的名称，并将 `region-name` 替换为您使用的 AWS 区域。

   ```
   // Load the SDK
   var AWS = require('aws-sdk');
   const bucket = 'bucket-name' // the bucket name without s3://
   const photo_source  = 'photo-source-name' // path and the name of file
   const photo_target = 'photo-target-name'
   
   var credentials = new AWS.SharedIniFileCredentials({profile: 'profile-name'});
   AWS.config.credentials = credentials;
   AWS.config.update({region:'region-name'});
   
   const client = new AWS.Rekognition();
      const params = {
        SourceImage: {
          S3Object: {
            Bucket: bucket,
            Name: photo_source
          },
        },
        TargetImage: {
          S3Object: {
            Bucket: bucket,
            Name: photo_target
          },
        },
        SimilarityThreshold: 70
      }
      client.compareFaces(params, function(err, response) {
        if (err) {
          console.log(err, err.stack); // an error occurred
        } else {
          response.FaceMatches.forEach(data => {
            let position   = data.Face.BoundingBox
            let similarity = data.Similarity
            console.log(`The face at: ${position.Left}, ${position.Top} matches with ${similarity} % confidence`)
      }) // for response.FaceMatches
        } // if
      });
   ```

------

## CompareFaces 操作请求
<a name="comparefaces-request"></a>

对 `CompareFaces` 的输入是两张图像：源图像和目标图像。在本示例中，源和目标图像均从本地文件系统加载。`SimilarityThreshold` 输入参数指定被比较的人脸必须达到的最低置信度，达到该阈值的匹配才会包含在响应中。有关更多信息，请参阅 [使用图像](images.md)。

```
{
    "SourceImage": {
        "Bytes": "/9j/4AAQSk2Q==..."
    },
    "TargetImage": {
        "Bytes": "/9j/4O1Q==..."
    },
    "SimilarityThreshold": 70
}
```

## CompareFaces 操作响应
<a name="comparefaces-response"></a>

响应包括以下内容：
+ 一组匹配的人脸：包含每张匹配人脸的相似度得分和元数据的匹配人脸列表。如果多张人脸匹配，则 `faceMatches` 数组包括所有人脸匹配。
+ 人脸匹配详细信息：每张匹配的人脸还提供边界框、置信度值、标记位置和相似度得分。
+ 不匹配的人脸列表：响应还包括目标图像中与源图像人脸不匹配的人脸，以及每张不匹配人脸的边界框。
+ 源人脸信息：包含有关用于比较的源图像中的人脸的信息（包括边界框和置信度值）。

该示例显示已在目标图像中找到一个人脸匹配。对于该人脸匹配，响应提供了边界框和置信度值（Amazon Rekognition 对边界框中包含人脸的置信度）。相似度得分（本例中为 100.0）指示两张人脸的相似程度。该示例还显示了 Amazon Rekognition 在目标图像中找到的、与源图像中分析的人脸不匹配的一张人脸。

```
{
    "FaceMatches": [{
        "Face": {
            "BoundingBox": {
                "Width": 0.5521978139877319,
                "Top": 0.1203877404332161,
                "Left": 0.23626373708248138,
                "Height": 0.3126954436302185
            },
            "Confidence": 99.98751068115234,
            "Pose": {
                "Yaw": -82.36799621582031,
                "Roll": -62.13221740722656,
                "Pitch": 0.8652129173278809
            },
            "Quality": {
                "Sharpness": 99.99880981445312,
                "Brightness": 54.49755096435547
            },
            "Landmarks": [{
                    "Y": 0.2996366024017334,
                    "X": 0.41685718297958374,
                    "Type": "eyeLeft"
                },
                {
                    "Y": 0.2658946216106415,
                    "X": 0.4414493441581726,
                    "Type": "eyeRight"
                },
                {
                    "Y": 0.3465650677680969,
                    "X": 0.48636093735694885,
                    "Type": "nose"
                },
                {
                    "Y": 0.30935320258140564,
                    "X": 0.6251809000968933,
                    "Type": "mouthLeft"
                },
                {
                    "Y": 0.26942989230155945,
                    "X": 0.6454493403434753,
                    "Type": "mouthRight"
                }
            ]
        },
        "Similarity": 100.0
    }],
    "SourceImageOrientationCorrection": "ROTATE_90",
    "TargetImageOrientationCorrection": "ROTATE_90",
    "UnmatchedFaces": [{
        "BoundingBox": {
            "Width": 0.4890109896659851,
            "Top": 0.6566604375839233,
            "Left": 0.10989011079072952,
            "Height": 0.278298944234848
        },
        "Confidence": 99.99992370605469,
        "Pose": {
            "Yaw": 51.51519012451172,
            "Roll": -110.32493591308594,
            "Pitch": -2.322134017944336
        },
        "Quality": {
            "Sharpness": 99.99671173095703,
            "Brightness": 57.23163986206055
        },
        "Landmarks": [{
                "Y": 0.8288310766220093,
                "X": 0.3133862614631653,
                "Type": "eyeLeft"
            },
            {
                "Y": 0.7632885575294495,
                "X": 0.28091415762901306,
                "Type": "eyeRight"
            },
            {
                "Y": 0.7417283654212952,
                "X": 0.3631140887737274,
                "Type": "nose"
            },
            {
                "Y": 0.8081989884376526,
                "X": 0.48565614223480225,
                "Type": "mouthLeft"
            },
            {
                "Y": 0.7548204660415649,
                "X": 0.46090251207351685,
                "Type": "mouthRight"
            }
        ]
    }],
    "SourceImageFace": {
        "BoundingBox": {
            "Width": 0.5521978139877319,
            "Top": 0.1203877404332161,
            "Left": 0.23626373708248138,
            "Height": 0.3126954436302185
        },
        "Confidence": 99.98751068115234
    }
}
```

# 检测存储视频中的人脸
<a name="faces-sqs-video"></a>

Amazon Rekognition Video 可以在 Amazon S3 存储桶存储的视频中检测人脸并提供一些信息，例如：
+ 在视频中检测到人脸的次数。
+ 人脸被检测到时在视频帧中的位置。
+ 人脸标记，例如左眼的位置。
+ 其他属性如[人脸属性指南](guidance-face-attributes.md)页面所述。

Amazon Rekognition Video 对存储视频的人脸检测是一个异步操作。要开始检测视频中的人脸，请调用 [StartFaceDetection](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartFaceDetection.html)。Amazon Rekognition Video 会将视频分析的完成状态发布到 Amazon Simple Notification Service (Amazon SNS) 主题。如果视频分析成功，您可以调用 [GetFaceDetection](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetFaceDetection.html) 来获取视频分析结果。有关启动视频分析和获取结果的详细信息，请参阅[调用 Amazon Rekognition Video 操作](api-video.md)。
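
这种“启动任务、再分页取回结果”的模式可以用一个与 SDK 无关的 Python 草图来示意。其中 `get_page` 是一个假设的回调，代表一次 `GetFaceDetection` 调用：它接受分页令牌并返回包含 `Faces` 和可选 `NextToken` 的字典（真实代码中对应 boto3 的 `get_face_detection(JobId=..., NextToken=...)`）：

```python
# GetFaceDetection 风格分页取回逻辑的示意。
# get_page 是假设的回调，模拟一次 API 调用并返回一页结果。
def iter_faces(get_page):
    """逐页取回结果，依次产出每个检测到的人脸。"""
    token = None
    while True:
        page = get_page(token)
        yield from page["Faces"]
        token = page.get("NextToken")
        if not token:  # 没有 NextToken 表示已到最后一页
            break

# 用两页硬编码的假数据演示分页遍历
pages = {
    None: {"Faces": [{"Timestamp": 0}, {"Timestamp": 500}], "NextToken": "t1"},
    "t1": {"Faces": [{"Timestamp": 1000}]},
}
faces = list(iter_faces(lambda token: pages[token]))
print(f"Total faces: {len(faces)}")
```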

此过程在[使用 Java 或 Python 分析存储在 Amazon S3 存储桶中的视频 (SDK)](video-analyzing-with-sqs.md)（使用 Amazon Simple Queue Service (Amazon SQS) 队列获取视频分析请求的完成状态）中的代码的基础上进行了扩展。

**检测存储在 Amazon S3 存储桶内的视频中的人脸 (SDK)**

1. 执行[使用 Java 或 Python 分析存储在 Amazon S3 存储桶中的视频 (SDK)](video-analyzing-with-sqs.md)。

1. 将以下代码添加到您在步骤 1 中创建的类 `VideoDetect`。

------
#### [ AWS CLI ]
   + 在以下代码示例中，将`amzn-s3-demo-bucket`和`video-name`更改为您在步骤 2 中指定的 Amazon S3 存储桶名称和文件名。
   + 将`region-name`更改为您使用的 AWS 区域。将`profile_name`的值替换为您的开发人员资料的名称。
   + 将 `TopicARN` 更改为您在 [配置 Amazon Rekognition Video](api-video-roles.md) 的步骤 3 中创建的 Amazon SNS 主题的 ARN。
   + 将 `RoleARN` 更改为您在 [配置 Amazon Rekognition Video](api-video-roles.md) 的步骤 7 中创建的 IAM 服务角色的 ARN。

   ```
    aws rekognition start-face-detection --video '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"Video-Name"}}' \
      --notification-channel '{"SNSTopicArn":"Topic-ARN","RoleArn":"Role-ARN"}' \
      --region region-name --profile profile-name
   ```

    如果您在 Windows 设备上访问 CLI，请使用双引号代替单引号，并用反斜杠（即 \"）对内部双引号进行转义，以解决可能遇到的任何解析器错误。例如，请参阅以下内容：

   ```
    aws rekognition start-face-detection --video "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"Video-Name\"}}" --notification-channel "{\"SNSTopicArn\":\"Topic-ARN\",\"RoleArn\":\"Role-ARN\"}" --region region-name --profile profile-name
   ```

   运行 `StartFaceDetection` 操作并获取任务 ID 号后，运行以下 `GetFaceDetection` 操作并提供任务 ID 号：

   ```
   aws rekognition get-face-detection --job-id job-id-number  --profile profile-name
   ```

------
#### [ Java ]

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   
   private static void StartFaceDetection(String bucket, String video) throws Exception{
            
       NotificationChannel channel= new NotificationChannel()
               .withSNSTopicArn(snsTopicArn)
               .withRoleArn(roleArn);
       
       StartFaceDetectionRequest req = new StartFaceDetectionRequest()
               .withVideo(new Video()
                       .withS3Object(new S3Object()
                           .withBucket(bucket)
                           .withName(video)))
               .withNotificationChannel(channel);
                           
                           
       
       StartFaceDetectionResult startFaceDetectionResult = rek.startFaceDetection(req);
       startJobId=startFaceDetectionResult.getJobId();
       
   } 
   
   private static void GetFaceDetectionResults() throws Exception{
       
       int maxResults=10;
       String paginationToken=null;
       GetFaceDetectionResult faceDetectionResult=null;
       
       do{
           if (faceDetectionResult !=null){
               paginationToken = faceDetectionResult.getNextToken();
           }
       
           faceDetectionResult = rek.getFaceDetection(new GetFaceDetectionRequest()
                .withJobId(startJobId)
                .withNextToken(paginationToken)
                .withMaxResults(maxResults));
       
           VideoMetadata videoMetaData=faceDetectionResult.getVideoMetadata();
               
           System.out.println("Format: " + videoMetaData.getFormat());
           System.out.println("Codec: " + videoMetaData.getCodec());
           System.out.println("Duration: " + videoMetaData.getDurationMillis());
           System.out.println("FrameRate: " + videoMetaData.getFrameRate());
               
               
           //Show faces, confidence and detection times
           List<FaceDetection> faces= faceDetectionResult.getFaces();
        
           for (FaceDetection face: faces) { 
               long seconds=face.getTimestamp()/1000;
               System.out.print("Sec: " + Long.toString(seconds) + " ");
               System.out.println(face.getFace().toString());
               System.out.println();           
           }
       } while (faceDetectionResult !=null && faceDetectionResult.getNextToken() != null);
         
           
   }
   ```

   在函数 `main` 中，将以下行：

   ```
           StartLabelDetection(amzn-s3-demo-bucket, video);
   
           if (GetSQSMessageSuccess()==true)
           	GetLabelDetectionResults();
   ```

   替换为:

   ```
           StartFaceDetection(amzn-s3-demo-bucket, video);
   
           if (GetSQSMessageSuccess()==true)
           	GetFaceDetectionResults();
   ```

------
#### [ Java V2 ]

   此代码取自 AWS 文档 SDK 示例 GitHub 存储库。请在[此处](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/VideoDetectFaces.java)查看完整示例。

   ```
   //snippet-start:[rekognition.java2.recognize_video_faces.import]
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.*;
   import java.util.List;
   //snippet-end:[rekognition.java2.recognize_video_faces.import]
   
   
   /**
   * Before running this Java V2 code example, set up your development environment, including your credentials.
   *
   * For more information, see the following documentation topic:
   *
   * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
   */
   public class VideoDetectFaces {
   
    private static String startJobId ="";
    public static void main(String[] args) {
   
        final String usage = "\n" +
            "Usage: " +
            "   <bucket> <video> <topicArn> <roleArn>\n\n" +
            "Where:\n" +
             "   bucket - The name of the bucket in which the video is located (for example, amzn-s3-demo-bucket). \n\n"+
            "   video - The name of video (for example, people.mp4). \n\n" +
            "   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic. \n\n" +
            "   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use. \n\n" ;
   
        if (args.length != 4) {
            System.out.println(usage);
            System.exit(1);
        }
   
        String bucket = args[0];
        String video = args[1];
        String topicArn = args[2];
        String roleArn = args[3];
   
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
            .region(region)
            .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
            .build();
   
        NotificationChannel channel = NotificationChannel.builder()
            .snsTopicArn(topicArn)
            .roleArn(roleArn)
            .build();
   
        StartFaceDetection(rekClient, channel, bucket, video);
        GetFaceResults(rekClient);
        System.out.println("This example is done!");
        rekClient.close();
    }
   
    // snippet-start:[rekognition.java2.recognize_video_faces.main]
    public static void StartFaceDetection(RekognitionClient rekClient,
                                          NotificationChannel channel,
                                          String bucket,
                                          String video) {
   
        try {
            S3Object s3Obj = S3Object.builder()
                .bucket(bucket)
                .name(video)
                .build();
   
            Video vidOb = Video.builder()
                .s3Object(s3Obj)
                .build();
   
            StartFaceDetectionRequest  faceDetectionRequest = StartFaceDetectionRequest.builder()
                .jobTag("Faces")
                .faceAttributes(FaceAttributes.ALL)
                .notificationChannel(channel)
                .video(vidOb)
                .build();
   
            StartFaceDetectionResponse startFaceDetectionResponse = rekClient.startFaceDetection(faceDetectionRequest);
            startJobId=startFaceDetectionResponse.jobId();
   
        } catch(RekognitionException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
   
    public static void GetFaceResults(RekognitionClient rekClient) {
   
        try {
            String paginationToken=null;
            GetFaceDetectionResponse faceDetectionResponse=null;
            boolean finished = false;
            String status;
            int yy=0 ;
   
            do{
                if (faceDetectionResponse !=null)
                    paginationToken = faceDetectionResponse.nextToken();
   
                GetFaceDetectionRequest recognitionRequest = GetFaceDetectionRequest.builder()
                    .jobId(startJobId)
                    .nextToken(paginationToken)
                    .maxResults(10)
                    .build();
   
                // Wait until the job succeeds
                while (!finished) {
   
                    faceDetectionResponse = rekClient.getFaceDetection(recognitionRequest);
                    status = faceDetectionResponse.jobStatusAsString();
   
                    if (status.compareTo("SUCCEEDED") == 0)
                        finished = true;
                    else {
                        System.out.println(yy + " status is: " + status);
                        Thread.sleep(1000);
                    }
                    yy++;
                }
   
                finished = false;
   
                // Proceed when the job is done - otherwise VideoMetadata is null
                VideoMetadata videoMetaData=faceDetectionResponse.videoMetadata();
                System.out.println("Format: " + videoMetaData.format());
                System.out.println("Codec: " + videoMetaData.codec());
                System.out.println("Duration: " + videoMetaData.durationMillis());
                System.out.println("FrameRate: " + videoMetaData.frameRate());
                System.out.println();
   
                // Show face information
                List<FaceDetection> faces= faceDetectionResponse.faces();
   
                for (FaceDetection face: faces) {
                    String age = face.face().ageRange().toString();
                    String smile = face.face().smile().toString();
                    System.out.println("The detected face is estimated to be "
                                + age + " years old.");
                    System.out.println("There is a smile: " + smile);
                }
   
            } while (faceDetectionResponse !=null && faceDetectionResponse.nextToken() != null);
   
        } catch(RekognitionException | InterruptedException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
    // snippet-end:[rekognition.java2.recognize_video_faces.main]
   }
   ```

------
#### [ Python ]

   ```
   #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
       # ============== Faces===============
       def StartFaceDetection(self):
           response=self.rek.start_face_detection(Video={'S3Object': {'Bucket': self.bucket, 'Name': self.video}},
               NotificationChannel={'RoleArn': self.roleArn, 'SNSTopicArn': self.snsTopicArn})
   
           self.startJobId=response['JobId']
           print('Start Job Id: ' + self.startJobId)
   
       def GetFaceDetectionResults(self):
           maxResults = 10
           paginationToken = ''
           finished = False
   
           while finished == False:
               response = self.rek.get_face_detection(JobId=self.startJobId,
                                               MaxResults=maxResults,
                                               NextToken=paginationToken)
   
               print('Codec: ' + response['VideoMetadata']['Codec'])
               print('Duration: ' + str(response['VideoMetadata']['DurationMillis']))
               print('Format: ' + response['VideoMetadata']['Format'])
               print('Frame rate: ' + str(response['VideoMetadata']['FrameRate']))
               print()
   
               for faceDetection in response['Faces']:
                   print('Face: ' + str(faceDetection['Face']))
                   print('Confidence: ' + str(faceDetection['Face']['Confidence']))
                   print('Timestamp: ' + str(faceDetection['Timestamp']))
                   print()
   
               if 'NextToken' in response:
                   paginationToken = response['NextToken']
               else:
                   finished = True
   ```

   In the function `main`, replace the lines:

   ```
       analyzer.StartLabelDetection()
       if analyzer.GetSQSMessageSuccess()==True:
           analyzer.GetLabelDetectionResults()
   ```

   with:

   ```
       analyzer.StartFaceDetection()
       if analyzer.GetSQSMessageSuccess()==True:
           analyzer.GetFaceDetectionResults()
   ```

------
**Note**  
If you have already run a video example other than [Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK)](video-analyzing-with-sqs.md), the function names that you replace will be different.

1. Run the code. Information about the faces detected in the video is displayed.

## GetFaceDetection operation response
<a name="getfacedetection-operation-response"></a>

`GetFaceDetection` returns an array, `Faces`, that contains information about the faces detected in the video. An array element, [FaceDetection](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_FaceDetection.html), exists for each time a face is detected in the video. The array elements returned are sorted by time, in milliseconds since the start of the video.
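
Because the `Faces` array is sorted by time, detections that occur in the same frame share a `Timestamp` value. As an illustration (the helper name is hypothetical, not part of the Rekognition API), a short sketch groups the array elements by timestamp:

```python
from collections import defaultdict

def faces_by_timestamp(faces):
    """Group FaceDetection elements by Timestamp (milliseconds from video start)."""
    grouped = defaultdict(list)
    for detection in faces:
        grouped[detection["Timestamp"]].append(detection["Face"])
    return dict(grouped)

# Minimal stand-in for the Faces array in a GetFaceDetection response
faces = [
    {"Timestamp": 0, "Face": {"Confidence": 99.9}},
    {"Timestamp": 0, "Face": {"Confidence": 99.8}},
    {"Timestamp": 500, "Face": {"Confidence": 98.0}},
]
print({ts: len(group) for ts, group in faces_by_timestamp(faces).items()})  # {0: 2, 500: 1}
```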

The following example is a partial JSON response from `GetFaceDetection`. In the response, note the following:
+ **BoundingBox** – The coordinates of the bounding box that surrounds the face.
+ **Confidence** – The level of confidence that the bounding box contains a face.
+ **Landmarks** – An array of facial landmarks. For each landmark (such as the left eye, right eye, and mouth), the response provides the `x` and `y` coordinates.
+ **Face attributes** – A set of facial attributes, including: AgeRange, Beard, Emotions, Eyeglasses, EyesOpen, Gender, MouthOpen, Mustache, Smile, and Sunglasses. The value can be of different types, such as a Boolean (whether a person is wearing sunglasses) or a string (whether the person is male or female). In addition, for most attributes, the response also provides a confidence in the detected value for the attribute. Note that while the FaceOccluded and EyeDirection attributes are supported when using `DetectFaces`, they aren't supported when analyzing videos with `StartFaceDetection` and `GetFaceDetection`.
+ **Timestamp** – The time the face was detected in the video.
+ **Pagination information** – The example shows one page of face detection information. You can specify how many face elements to return in the `MaxResults` input parameter for `GetFaceDetection`. If more results than `MaxResults` exist, `GetFaceDetection` returns a token (`NextToken`) that's used to get the next page of results. For more information, see [Getting Amazon Rekognition Video analysis results](api-video.md#api-video-get).
+ **Video information** – The response includes information about the video format (`VideoMetadata`) in each page of information returned by `GetFaceDetection`.
+ **Quality** – Describes the brightness and sharpness of the face.
+ **Pose** – Describes the rotation of the face.
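
The `BoundingBox` values are ratios of the overall frame size, so they can be converted to pixel coordinates using the `FrameWidth` and `FrameHeight` fields from `VideoMetadata`. A minimal sketch (the helper name is hypothetical, and the dicts mimic the shape of the sample response):

```python
def bounding_box_to_pixels(face_detection, video_metadata):
    """Convert BoundingBox ratios to pixel coordinates, rounded to the nearest pixel."""
    box = face_detection["Face"]["BoundingBox"]
    width = video_metadata["FrameWidth"]
    height = video_metadata["FrameHeight"]
    return {
        "Left": round(box["Left"] * width),
        "Top": round(box["Top"] * height),
        "Width": round(box["Width"] * width),
        "Height": round(box["Height"] * height),
    }

# Values taken from the sample response
detection = {"Face": {"BoundingBox": {"Height": 0.23, "Left": 0.425, "Top": 0.163, "Width": 0.129}}}
metadata = {"FrameWidth": 1920, "FrameHeight": 1080}
print(bounding_box_to_pixels(detection, metadata))  # {'Left': 816, 'Top': 176, 'Width': 248, 'Height': 248}
```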

```
{
    "Faces": [
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.23000000417232513,
                    "Left": 0.42500001192092896,
                    "Top": 0.16333332657814026,
                    "Width": 0.12937499582767487
                },
                "Confidence": 99.97504425048828,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.46415066719055176,
                        "Y": 0.2572723925113678
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.5068183541297913,
                        "Y": 0.23705792427062988
                    },
                    {
                        "Type": "nose",
                        "X": 0.49765899777412415,
                        "Y": 0.28383663296699524
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.487221896648407,
                        "Y": 0.3452930748462677
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.5142884850502014,
                        "Y": 0.33167609572410583
                    }
                ],
                "Pose": {
                    "Pitch": 15.966927528381348,
                    "Roll": -15.547388076782227,
                    "Yaw": 11.34195613861084
                },
                "Quality": {
                    "Brightness": 44.80223083496094,
                    "Sharpness": 99.95819854736328
                }
            },
            "Timestamp": 0
        },
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.20000000298023224,
                    "Left": 0.029999999329447746,
                    "Top": 0.2199999988079071,
                    "Width": 0.11249999701976776
                },
                "Confidence": 99.85971069335938,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.06842322647571564,
                        "Y": 0.3010137975215912
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.10543643683195114,
                        "Y": 0.29697132110595703
                    },
                    {
                        "Type": "nose",
                        "X": 0.09569807350635529,
                        "Y": 0.33701086044311523
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.0732642263174057,
                        "Y": 0.3757539987564087
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.10589495301246643,
                        "Y": 0.3722417950630188
                    }
                ],
                "Pose": {
                    "Pitch": -0.5589138865470886,
                    "Roll": -5.1093974113464355,
                    "Yaw": 18.69594955444336
                },
                "Quality": {
                    "Brightness": 43.052337646484375,
                    "Sharpness": 99.68138885498047
                }
            },
            "Timestamp": 0
        },
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.2177777737379074,
                    "Left": 0.7593749761581421,
                    "Top": 0.13333334028720856,
                    "Width": 0.12250000238418579
                },
                "Confidence": 99.63436889648438,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.8005779385566711,
                        "Y": 0.20915353298187256
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.8391435146331787,
                        "Y": 0.21049551665782928
                    },
                    {
                        "Type": "nose",
                        "X": 0.8191410899162292,
                        "Y": 0.2523227035999298
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.8093273043632507,
                        "Y": 0.29053622484207153
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.8366993069648743,
                        "Y": 0.29101791977882385
                    }
                ],
                "Pose": {
                    "Pitch": 3.165884017944336,
                    "Roll": 1.4182015657424927,
                    "Yaw": -11.151537895202637
                },
                "Quality": {
                    "Brightness": 28.910892486572266,
                    "Sharpness": 97.61507415771484
                }
            },
            "Timestamp": 0
        }.......

    ],
    "JobStatus": "SUCCEEDED",
    "NextToken": "i7fj5XPV/fwviXqz0eag9Ow332Jd5G8ZGWf7hooirD/6V1qFmjKFOQZ6QPWUiqv29HbyuhMNqQ==",
    "VideoMetadata": {
        "Codec": "h264",
        "DurationMillis": 67301,
        "FileExtension": "mp4",
        "Format": "QuickTime / MOV",
        "FrameHeight": 1080,
        "FrameRate": 29.970029830932617,
        "FrameWidth": 1920
    }
}
```
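
Both `Timestamp` and `VideoMetadata.DurationMillis` are expressed in milliseconds. A small formatting helper (hypothetical, not part of the API) makes these offsets easier to read:

```python
def format_millis(ms):
    """Format a millisecond offset as HH:MM:SS.mmm."""
    seconds, millis = divmod(int(ms), 1000)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}.{millis:03d}"

print(format_millis(67301))  # 00:01:07.301 -- the DurationMillis from the sample response
```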