


# Comparing faces in images
<a name="faces-comparefaces"></a>

With Amazon Rekognition, you can compare faces between two images by using the [CompareFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CompareFaces.html) operation. This capability is useful for applications such as identity verification and photo matching.

CompareFaces compares a face in the *source* image with each face in the *target* image. Images are passed to CompareFaces in one of the following ways:
+ As a base64-encoded representation of the image.
+ As an Amazon S3 object.
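As a quick sketch of the two input shapes, the following hypothetical helper builds the `Image` parameter that boto3's `compare_faces` accepts for both `SourceImage` and `TargetImage` (when you call the API through an SDK such as boto3, the SDK handles the base64 encoding of the `Bytes` field for you):

```python
def image_param(image_bytes=None, bucket=None, key=None):
    """Build an Image parameter for CompareFaces from raw bytes
    or from an S3 object reference (hypothetical helper)."""
    if image_bytes is not None:
        return {'Bytes': image_bytes}
    return {'S3Object': {'Bucket': bucket, 'Name': key}}

# Image bytes, for example read from a local file
source = image_param(image_bytes=b'...jpeg bytes...')
# Amazon S3 object reference
target = image_param(bucket='amzn-s3-demo-bucket', key='target.jpg')
```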

**Face detection and face comparison**

Face comparison differs from face detection. Face detection (using DetectFaces) only identifies the presence and location of faces in an image or video. In contrast, face comparison matches a detected face in the source image against the faces in the target image to find matches.

**Similarity threshold**

Use the `SimilarityThreshold` parameter to define the minimum confidence level for matches to be included in the response. By default, only faces with a similarity score of greater than or equal to 80% are returned in the response.
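The service filters matches server-side at the threshold you pass, but you can also re-filter an existing response at a stricter cutoff without another API call; a minimal sketch over the response shape shown later in this topic:

```python
def filter_matches(response, threshold):
    """Keep only the FaceMatches entries at or above the given
    similarity threshold (re-filtering an existing response)."""
    return [m for m in response.get('FaceMatches', [])
            if m['Similarity'] >= threshold]

response = {'FaceMatches': [{'Similarity': 85.0}, {'Similarity': 99.5}]}
print(len(filter_matches(response, 90)))  # 1
```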

**Note**  
`CompareFaces` uses machine learning algorithms, which are probabilistic. A false negative is an incorrect prediction that a face in the target image has a low similarity confidence score when compared to the face in the source image. To reduce the probability of false negatives, we recommend that you compare the target image against multiple source images. If you plan to use `CompareFaces` to make a decision that impacts an individual's rights, privacy, or access to services, we recommend that you pass the result to a human for review and further validation before taking action.
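One way to apply the multiple-source-image advice is to call `CompareFaces` once per source image and keep the best similarity seen. The aggregation itself is simple; in this sketch, the `responses` list stands in for successive `CompareFaces` results:

```python
def best_similarity(responses):
    """Return the highest similarity across several CompareFaces
    responses, one response per source image compared."""
    scores = [match['Similarity']
              for response in responses
              for match in response.get('FaceMatches', [])]
    return max(scores, default=0.0)

responses = [
    {'FaceMatches': [{'Similarity': 72.5}]},  # source image 1
    {'FaceMatches': []},                      # source image 2: no match
    {'FaceMatches': [{'Similarity': 91.2}]},  # source image 3
]
print(best_similarity(responses))  # 91.2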

 

The following code examples demonstrate how to use the CompareFaces operation with various AWS SDKs. In the AWS CLI example, you upload two JPEG images to an Amazon S3 bucket and specify the object key names. In the other examples, you load two files from the local file system and pass them as image byte arrays.

**To compare faces**

1. If you haven't already:

   1. Create or update a user with `AmazonRekognitionFullAccess` and `AmazonS3ReadOnlyAccess` (AWS CLI example only) permissions. For more information, see [Step 1: Set up an AWS account and create a user](setting-up.md#setting-up-iam).

   1. Install and configure the AWS CLI and the AWS SDKs. For more information, see [Step 2: Set up the AWS CLI and AWS SDKs](setup-awscli-sdk.md).

1. Use the following code examples to call the `CompareFaces` operation.

------
#### [ Java ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `sourceImage` and `targetImage` with the path and file name of the source and target images.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   package aws.example.rekognition.image;
   import com.amazonaws.services.rekognition.AmazonRekognition;
   import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
   import com.amazonaws.services.rekognition.model.Image;
   import com.amazonaws.services.rekognition.model.BoundingBox;
   import com.amazonaws.services.rekognition.model.CompareFacesMatch;
   import com.amazonaws.services.rekognition.model.CompareFacesRequest;
   import com.amazonaws.services.rekognition.model.CompareFacesResult;
   import com.amazonaws.services.rekognition.model.ComparedFace;
   import java.util.List;
   import java.io.File;
   import java.io.FileInputStream;
   import java.io.InputStream;
   import java.nio.ByteBuffer;
   import com.amazonaws.util.IOUtils;
   
   public class CompareFaces {
   
      public static void main(String[] args) throws Exception{
          Float similarityThreshold = 70F;
          String sourceImage = "source.jpg";
          String targetImage = "target.jpg";
          ByteBuffer sourceImageBytes=null;
          ByteBuffer targetImageBytes=null;
   
          AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();
   
          //Load source and target images and create input parameters
          try (InputStream inputStream = new FileInputStream(new File(sourceImage))) {
             sourceImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
          }
          catch(Exception e)
          {
              System.out.println("Failed to load source image " + sourceImage);
              System.exit(1);
          }
          try (InputStream inputStream = new FileInputStream(new File(targetImage))) {
              targetImageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
          }
          catch(Exception e)
          {
              System.out.println("Failed to load target images: " + targetImage);
              System.exit(1);
          }
   
          Image source=new Image()
               .withBytes(sourceImageBytes);
          Image target=new Image()
               .withBytes(targetImageBytes);
   
          CompareFacesRequest request = new CompareFacesRequest()
                  .withSourceImage(source)
                  .withTargetImage(target)
                  .withSimilarityThreshold(similarityThreshold);
   
          // Call operation
          CompareFacesResult compareFacesResult=rekognitionClient.compareFaces(request);
   
   
          // Display results
          List <CompareFacesMatch> faceDetails = compareFacesResult.getFaceMatches();
          for (CompareFacesMatch match: faceDetails){
            ComparedFace face= match.getFace();
            BoundingBox position = face.getBoundingBox();
            System.out.println("Face at " + position.getLeft().toString()
                  + " " + position.getTop()
                  + " matches with " + match.getSimilarity().toString()
                  + "% confidence.");
   
          }
          List<ComparedFace> uncompared = compareFacesResult.getUnmatchedFaces();
   
          System.out.println("There was " + uncompared.size()
               + " face(s) that did not match");
      }
   }
   ```

------
#### [ Java V2 ]

   This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example [here](https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/CompareFaces.java).

   ```
   import java.util.List;
   
   import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.rekognition.RekognitionClient;
   import software.amazon.awssdk.services.rekognition.model.RekognitionException;
   import software.amazon.awssdk.services.rekognition.model.Image;
   import software.amazon.awssdk.services.rekognition.model.BoundingBox;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesMatch;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesRequest;
   import software.amazon.awssdk.services.rekognition.model.CompareFacesResponse;
   import software.amazon.awssdk.services.rekognition.model.ComparedFace;
   import software.amazon.awssdk.core.SdkBytes;
   import java.io.FileInputStream;
   import java.io.FileNotFoundException;
   import java.io.InputStream;
   
   
   /**
    * Before running this Java V2 code example, set up your development environment, including your credentials.
    *
    * For more information, see the following documentation topic:
    *
    * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
    */
   public class CompareFaces {
   
       public static void main(String[] args) {
   
           final String usage = "\n" +
               "Usage: " +
               "   <pathSource> <pathTarget>\n\n" +
               "Where:\n" +
               "   pathSource - The path to the source image (for example, C:\\AWS\\pic1.png). \n " +
               "   pathTarget - The path to the target image (for example, C:\\AWS\\pic2.png). \n\n";
   
           if (args.length != 2) {
               System.out.println(usage);
               System.exit(1);
           }
   
           Float similarityThreshold = 70F;
           String sourceImage = args[0];
           String targetImage = args[1];
           Region region = Region.US_EAST_1;
           RekognitionClient rekClient = RekognitionClient.builder()
               .region(region)
               .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
               .build();
   
           compareTwoFaces(rekClient, similarityThreshold, sourceImage, targetImage);
           rekClient.close();
      }
   
       // snippet-start:[rekognition.java2.compare_faces.main]
       public static void compareTwoFaces(RekognitionClient rekClient, Float similarityThreshold, String sourceImage, String targetImage) {
           try {
               InputStream sourceStream = new FileInputStream(sourceImage);
               InputStream tarStream = new FileInputStream(targetImage);
               SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);
               SdkBytes targetBytes = SdkBytes.fromInputStream(tarStream);
   
               // Create an Image object for the source image.
               Image souImage = Image.builder()
                   .bytes(sourceBytes)
                   .build();
   
               Image tarImage = Image.builder()
                   .bytes(targetBytes)
                   .build();
   
               CompareFacesRequest facesRequest = CompareFacesRequest.builder()
                   .sourceImage(souImage)
                   .targetImage(tarImage)
                   .similarityThreshold(similarityThreshold)
                   .build();
   
               // Compare the two images.
               CompareFacesResponse compareFacesResult = rekClient.compareFaces(facesRequest);
               List<CompareFacesMatch> faceDetails = compareFacesResult.faceMatches();
               for (CompareFacesMatch match: faceDetails){
                   ComparedFace face= match.face();
                   BoundingBox position = face.boundingBox();
                   System.out.println("Face at " + position.left().toString()
                           + " " + position.top()
                            + " matches with " + match.similarity().toString()
                           + "% confidence.");
   
               }
               List<ComparedFace> uncompared = compareFacesResult.unmatchedFaces();
               System.out.println("There was " + uncompared.size() + " face(s) that did not match");
               System.out.println("Source image rotation: " + compareFacesResult.sourceImageOrientationCorrection());
               System.out.println("target image rotation: " + compareFacesResult.targetImageOrientationCorrection());
   
           } catch(RekognitionException | FileNotFoundException e) {
                System.out.println(e.getMessage());
               System.exit(1);
           }
       }
       // snippet-end:[rekognition.java2.compare_faces.main]
   }
   ```

------
#### [ AWS CLI ]

   This example displays the JSON output from the `compare-faces` AWS CLI operation.

   Replace `amzn-s3-demo-bucket` with the name of the Amazon S3 bucket that contains the source and target images. Replace each `image-name` with the file name of your source or target image.

   ```
   aws rekognition compare-faces \
   --target-image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"image-name"}}' \
   --source-image '{"S3Object":{"Bucket":"amzn-s3-demo-bucket","Name":"image-name"}}' \
   --profile profile-name
   ```

    If you are accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with a backslash (that is, \") to address any parser errors you might encounter. For an example, see the following:

   ```
   aws rekognition compare-faces --target-image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" \ 
   --source-image "{\"S3Object\":{\"Bucket\":\"amzn-s3-demo-bucket\",\"Name\":\"image-name\"}}" --profile profile-name
   ```

------
#### [ Python ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `source_file` and `target_file` with the path and file name of the source and target images. Replace the value of `profile_name` in the line that creates the Rekognition session with the name of your developer profile.

   ```
   # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   # SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   import boto3
   
   def compare_faces(sourceFile, targetFile):
   
       session = boto3.Session(profile_name='profile-name')
       client = session.client('rekognition')
   
       imageSource = open(sourceFile, 'rb')
       imageTarget = open(targetFile, 'rb')
   
       response = client.compare_faces(SimilarityThreshold=80,
                                       SourceImage={'Bytes': imageSource.read()},
                                       TargetImage={'Bytes': imageTarget.read()})
   
       for faceMatch in response['FaceMatches']:
           position = faceMatch['Face']['BoundingBox']
           similarity = str(faceMatch['Similarity'])
           print('The face at ' +
                 str(position['Left']) + ' ' +
                 str(position['Top']) +
                 ' matches with ' + similarity + '% confidence')
   
       imageSource.close()
       imageTarget.close()
       return len(response['FaceMatches'])
   
   def main():
       source_file = 'source-file-name'
       target_file = 'target-file-name'
       face_matches = compare_faces(source_file, target_file)
       print("Face matches: " + str(face_matches))
   
   if __name__ == "__main__":
       main()
   ```

------
#### [ .NET ]

   This example displays information about matching faces in source and target images that are loaded from the local file system.

   Replace the values of `sourceImage` and `targetImage` with the path and file name of the source and target images.

   ```
   //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
   //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)
   
   using System;
   using System.IO;
   using Amazon.Rekognition;
   using Amazon.Rekognition.Model;
   
   public class CompareFaces
   {
       public static void Example()
       {
           float similarityThreshold = 70F;
           String sourceImage = "source.jpg";
           String targetImage = "target.jpg";
   
           AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();
   
           Amazon.Rekognition.Model.Image imageSource = new Amazon.Rekognition.Model.Image();
           try
           {
               using (FileStream fs = new FileStream(sourceImage, FileMode.Open, FileAccess.Read))
               {
                   byte[] data = new byte[fs.Length];
                   fs.Read(data, 0, (int)fs.Length);
                   imageSource.Bytes = new MemoryStream(data);
               }
           }
           catch (Exception)
           {
               Console.WriteLine("Failed to load source image: " + sourceImage);
               return;
           }
   
           Amazon.Rekognition.Model.Image imageTarget = new Amazon.Rekognition.Model.Image();
           try
           {
               using (FileStream fs = new FileStream(targetImage, FileMode.Open, FileAccess.Read))
               {
                    byte[] data = new byte[fs.Length];
                    fs.Read(data, 0, (int)fs.Length);
                   imageTarget.Bytes = new MemoryStream(data);
               }
           }
           catch (Exception)
           {
               Console.WriteLine("Failed to load target image: " + targetImage);
               return;
           }
   
           CompareFacesRequest compareFacesRequest = new CompareFacesRequest()
           {
               SourceImage = imageSource,
               TargetImage = imageTarget,
               SimilarityThreshold = similarityThreshold
           };
   
           // Call operation
           CompareFacesResponse compareFacesResponse = rekognitionClient.CompareFaces(compareFacesRequest);
   
           // Display results
           foreach(CompareFacesMatch match in compareFacesResponse.FaceMatches)
           {
               ComparedFace face = match.Face;
               BoundingBox position = face.BoundingBox;
               Console.WriteLine("Face at " + position.Left
                     + " " + position.Top
                     + " matches with " + match.Similarity
                     + "% confidence.");
           }
   
           Console.WriteLine("There was " + compareFacesResponse.UnmatchedFaces.Count + " face(s) that did not match");
   
       }
   }
   ```

------
#### [ Ruby ]

   This example displays information about matching faces in source and target images that are retrieved from an Amazon S3 bucket.

   Replace the value of `bucket` with the name of the S3 bucket that contains the images, and the values of `photo_source` and `photo_target` with the key names of the source and target images.

   ```
     # Add to your Gemfile
      # gem 'aws-sdk-rekognition'
      require 'aws-sdk-rekognition'
      credentials = Aws::Credentials.new(
         ENV['AWS_ACCESS_KEY_ID'],
         ENV['AWS_SECRET_ACCESS_KEY']
      )
      bucket        = 'bucket' # the bucketname without s3://
      photo_source  = 'source.jpg'
      photo_target  = 'target.jpg'
      client   = Aws::Rekognition::Client.new credentials: credentials
      attrs = {
        source_image: {
          s3_object: {
            bucket: bucket,
            name: photo_source
          },
        },
        target_image: {
          s3_object: {
            bucket: bucket,
            name: photo_target
          },
        },
        similarity_threshold: 70
      }
      response = client.compare_faces attrs
      response.face_matches.each do |face_match|
        position   = face_match.face.bounding_box
        similarity = face_match.similarity
        puts "The face at: #{position.left}, #{position.top} matches with #{similarity} % confidence"
      end
   ```

------
#### [ Node.js ]

   This example displays information about matching faces in source and target images that are retrieved from an Amazon S3 bucket.

   Replace the value of `bucket` with the name of the S3 bucket that contains the images, and the values of `photo_source` and `photo_target` with the key names of the source and target images. Replace the value of `profile-name` in the line that creates the credentials with the name of your developer profile, and `region-name` with your AWS Region.

   ```
   // Load the SDK
   var AWS = require('aws-sdk');
   const bucket = 'bucket-name' // the bucket name without s3://
   const photo_source  = 'photo-source-name' // path and the name of file
   const photo_target = 'photo-target-name'
   
   var credentials = new AWS.SharedIniFileCredentials({profile: 'profile-name'});
   AWS.config.credentials = credentials;
   AWS.config.update({region:'region-name'});
   
   const client = new AWS.Rekognition();
      const params = {
        SourceImage: {
          S3Object: {
            Bucket: bucket,
            Name: photo_source
          },
        },
        TargetImage: {
          S3Object: {
            Bucket: bucket,
            Name: photo_target
          },
        },
        SimilarityThreshold: 70
      }
      client.compareFaces(params, function(err, response) {
        if (err) {
          console.log(err, err.stack); // an error occurred
        } else {
          response.FaceMatches.forEach(data => {
            let position   = data.Face.BoundingBox
            let similarity = data.Similarity
            console.log(`The face at: ${position.Left}, ${position.Top} matches with ${similarity} % confidence`)
         }) // for response.FaceMatches
        } // if
      });
   ```

------

## CompareFaces operation request
<a name="comparefaces-request"></a>

The input to `CompareFaces` is an image. In this example, the source and target images are loaded from the local file system. The `SimilarityThreshold` input parameter specifies the minimum confidence that compared faces must match to be included in the response. For more information, see [Working with images](images.md).

```
{
    "SourceImage": {
        "Bytes": "/9j/4AAQSk2Q==..."
    },
    "TargetImage": {
        "Bytes": "/9j/4O1Q==..."
    },
    "SimilarityThreshold": 70
}
```
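In boto3, the same request fields map directly onto keyword arguments to `compare_faces`. A sketch of assembling the request (the byte strings are placeholders, and the commented-out call would require AWS credentials):

```python
def build_request(source_bytes, target_bytes, threshold=70):
    """Assemble CompareFaces keyword arguments from image bytes
    and a similarity threshold."""
    return {
        'SourceImage': {'Bytes': source_bytes},
        'TargetImage': {'Bytes': target_bytes},
        'SimilarityThreshold': threshold,
    }

params = build_request(b'...source jpeg...', b'...target jpeg...')
# client = boto3.client('rekognition')
# response = client.compare_faces(**params)
```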

## CompareFaces operation response
<a name="comparefaces-response"></a>

The response includes:
+ An array of face matches: A list of matching faces, with a similarity score and metadata for each matched face. If multiple faces match, the `faceMatches` array includes all of the face matches.
+ Face match details: Each matched face also provides a bounding box, a confidence value, landmark locations, and a similarity score.
+ A list of unmatched faces: The response also includes faces in the target image that didn't match the face in the source image. A bounding box is included for each unmatched face.
+ Source face information: Includes information about the face from the source image that was used for the comparison, including the bounding box and confidence value.

This example shows that one face match was found in the target image. For that face match, the response provides a bounding box and a confidence value (the level of confidence that Amazon Rekognition has that the bounding box contains a face). The similarity score of 100 indicates how similar the faces are. The example also shows a face that Amazon Rekognition found in the target image that didn't match the face analyzed in the source image.

```
{
    "FaceMatches": [{
        "Face": {
            "BoundingBox": {
                "Width": 0.5521978139877319,
                "Top": 0.1203877404332161,
                "Left": 0.23626373708248138,
                "Height": 0.3126954436302185
            },
            "Confidence": 99.98751068115234,
            "Pose": {
                "Yaw": -82.36799621582031,
                "Roll": -62.13221740722656,
                "Pitch": 0.8652129173278809
            },
            "Quality": {
                "Sharpness": 99.99880981445312,
                "Brightness": 54.49755096435547
            },
            "Landmarks": [{
                    "Y": 0.2996366024017334,
                    "X": 0.41685718297958374,
                    "Type": "eyeLeft"
                },
                {
                    "Y": 0.2658946216106415,
                    "X": 0.4414493441581726,
                    "Type": "eyeRight"
                },
                {
                    "Y": 0.3465650677680969,
                    "X": 0.48636093735694885,
                    "Type": "nose"
                },
                {
                    "Y": 0.30935320258140564,
                    "X": 0.6251809000968933,
                    "Type": "mouthLeft"
                },
                {
                    "Y": 0.26942989230155945,
                    "X": 0.6454493403434753,
                    "Type": "mouthRight"
                }
            ]
        },
        "Similarity": 100.0
    }],
    "SourceImageOrientationCorrection": "ROTATE_90",
    "TargetImageOrientationCorrection": "ROTATE_90",
    "UnmatchedFaces": [{
        "BoundingBox": {
            "Width": 0.4890109896659851,
            "Top": 0.6566604375839233,
            "Left": 0.10989011079072952,
            "Height": 0.278298944234848
        },
        "Confidence": 99.99992370605469,
        "Pose": {
            "Yaw": 51.51519012451172,
            "Roll": -110.32493591308594,
            "Pitch": -2.322134017944336
        },
        "Quality": {
            "Sharpness": 99.99671173095703,
            "Brightness": 57.23163986206055
        },
        "Landmarks": [{
                "Y": 0.8288310766220093,
                "X": 0.3133862614631653,
                "Type": "eyeLeft"
            },
            {
                "Y": 0.7632885575294495,
                "X": 0.28091415762901306,
                "Type": "eyeRight"
            },
            {
                "Y": 0.7417283654212952,
                "X": 0.3631140887737274,
                "Type": "nose"
            },
            {
                "Y": 0.8081989884376526,
                "X": 0.48565614223480225,
                "Type": "mouthLeft"
            },
            {
                "Y": 0.7548204660415649,
                "X": 0.46090251207351685,
                "Type": "mouthRight"
            }
        ]
    }],
    "SourceImageFace": {
        "BoundingBox": {
            "Width": 0.5521978139877319,
            "Top": 0.1203877404332161,
            "Left": 0.23626373708248138,
            "Height": 0.3126954436302185
        },
        "Confidence": 99.98751068115234
    }
}
```
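Walking those fields programmatically is straightforward; here is a sketch that summarizes a response of the shape above (trimmed to the fields it reads):

```python
def summarize(response):
    """Pull the headline numbers out of a CompareFaces response:
    per-match similarity, unmatched-face count, source face confidence."""
    return {
        'similarities': [m['Similarity'] for m in response.get('FaceMatches', [])],
        'unmatched_count': len(response.get('UnmatchedFaces', [])),
        'source_confidence': response.get('SourceImageFace', {}).get('Confidence'),
    }

response = {
    'FaceMatches': [{'Face': {'Confidence': 99.99}, 'Similarity': 100.0}],
    'UnmatchedFaces': [{'Confidence': 99.99}],
    'SourceImageFace': {'Confidence': 99.99},
}
print(summarize(response))
# {'similarities': [100.0], 'unmatched_count': 1, 'source_confidence': 99.99}
```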