Adding faces to a collection
You can use the IndexFaces operation to detect faces in an image and add them to a collection. For each face detected, Amazon Rekognition extracts facial features and stores the feature information in a database. In addition, the operation stores metadata for each detected face in the specified face collection. Amazon Rekognition doesn't store the actual image bytes.
For information about providing suitable faces for indexing, see Recommendations for facial comparison input images.
For each face, the IndexFaces operation persists the following information:
-
Multidimensional facial features – IndexFaces uses facial analysis to extract multidimensional information about the facial features and stores the information in the face collection. You can't access this information directly. However, Amazon Rekognition uses this information when it searches a face collection for face matches.
-
Metadata – The metadata for each face includes a bounding box, a confidence level (that the bounding box contains a face), IDs assigned by Amazon Rekognition (face ID and image ID), and an external image ID (if you provided one) in the request. This information is returned to you in response to the IndexFaces API call. For an example, see the face element in the following example response.
The service returns this metadata in response to the following API calls: IndexFaces, ListFaces, SearchFaces, and SearchFacesByImage.
The number of faces that IndexFaces indexes depends on the version of the face detection model that's associated with the input collection. For more information, see Understanding model versioning.
Information about indexed faces is returned in an array of FaceRecord objects.
You might want to associate indexed faces with the image the faces were detected in. For example, you might want to maintain a client-side index of images and the faces in each image. To associate faces with an image, specify an image ID in the ExternalImageId request parameter. The image ID can be the file name or another ID that you create.
In addition to the preceding information that the API persists in the face collection, the API also returns face details that aren't persisted in the collection. (See the faceDetail element in the following example response.)
DetectFaces returns the same information, so you don't need to call both DetectFaces and IndexFaces for the same image.
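The following sketch (a minimal illustration in Python, assuming default boto3 credentials are configured; the collection, bucket, and image names are placeholders, not values from this topic) shows how ExternalImageId lets you keep a client-side mapping from each indexed face back to its source image, and how the same response already carries the FaceDetail data:
import boto3

# Minimal sketch: 'MyCollection', 'amzn-s3-demo-bucket', and 'group-photo.jpg'
# are placeholder names.
client = boto3.client('rekognition')

response = client.index_faces(
    CollectionId='MyCollection',
    Image={'S3Object': {'Bucket': 'amzn-s3-demo-bucket', 'Name': 'group-photo.jpg'}},
    ExternalImageId='group-photo.jpg')  # associates every indexed face with this image

# Client-side index: face ID -> the image the face came from.
face_to_image = {record['Face']['FaceId']: record['Face']['ExternalImageId']
                 for record in response['FaceRecords']}

# FaceDetail carries the same analysis DetectFaces would return, so one call suffices.
for record in response['FaceRecords']:
    print(record['Face']['FaceId'], record['FaceDetail']['BoundingBox'])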
Filtering faces
The IndexFaces operation lets you filter the faces that are indexed from an image. With IndexFaces, you can specify a maximum number of faces to index, or you can choose to index only faces detected with high quality.
You can specify the maximum number of faces that IndexFaces indexes by using the MaxFaces input parameter. This is useful when you want to index the largest faces in an image and don't want to index smaller faces, such as the faces of people standing in the background.
By default, IndexFaces chooses a quality bar that's used to filter out faces. You can use the QualityFilter input parameter to explicitly set the quality bar. The valid values are AUTO, LOW, MEDIUM, HIGH, and NONE (no quality filtering).
IndexFaces filters out faces for reasons such as the following:
-
The face is too small compared to the image dimensions.
-
The face is too blurry.
-
The image is too dark.
-
The face has an extreme pose.
-
The face doesn't have enough detail to be suitable for face search.
Information about faces that aren't indexed is returned in an array of UnindexedFace objects. The Reasons array contains a list of reasons why a face wasn't indexed. For example, a value of EXCEEDS_MAX_FACES means a face wasn't indexed because the number of faces specified by MaxFaces had already been detected.
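As a rough sketch of how you might act on these reasons (Python, assuming response is the dictionary returned by an index_faces call like the examples later in this topic), you could separate faces that were skipped only because of the MaxFaces limit from faces that failed the quality or size checks:
# Sketch: split unindexed faces by reason. EXCEEDS_MAX_FACES means the MaxFaces
# limit was reached; the other reasons reflect quality or size filtering.
over_limit = [f for f in response['UnindexedFaces']
              if 'EXCEEDS_MAX_FACES' in f['Reasons']]
filtered_out = [f for f in response['UnindexedFaces']
                if 'EXCEEDS_MAX_FACES' not in f['Reasons']]
print('{} faces over the MaxFaces limit, {} filtered for quality or size'.format(
    len(over_limit), len(filtered_out)))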
For more information, see Managing faces in a collection.
To add faces to a collection (SDK)
-
If you haven't already:
-
Create or update a user with AmazonRekognitionFullAccess and AmazonS3ReadOnlyAccess permissions. For more information, see Step 1: Set up an AWS account and create a user.
-
Install and configure the AWS CLI and the AWS SDKs. For more information, see Step 2: Set up the AWS CLI and AWS SDKs.
-
Upload an image (containing one or more faces) to your Amazon S3 bucket.
For instructions, see Uploading Objects into Amazon S3 in the Amazon Simple Storage Service User Guide.
-
Use the following examples to call the IndexFaces operation.
- Java
-
This example displays the face identifiers for faces added to the collection.
Change the value of collectionId to the name of the collection that you want to add a face to. Replace the values of bucket and photo with the names of the Amazon S3 bucket and image that you used in step 2. The .withMaxFaces(1) parameter restricts the number of indexed faces to 1. Remove or change its value to suit your needs.
//Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

package aws.example.rekognition.image;

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.FaceRecord;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.IndexFacesRequest;
import com.amazonaws.services.rekognition.model.IndexFacesResult;
import com.amazonaws.services.rekognition.model.QualityFilter;
import com.amazonaws.services.rekognition.model.S3Object;
import com.amazonaws.services.rekognition.model.UnindexedFace;
import java.util.List;

public class AddFacesToCollection {
    public static final String collectionId = "MyCollection";
    public static final String bucket = "bucket";
    public static final String photo = "input.jpg";

    public static void main(String[] args) throws Exception {
        AmazonRekognition rekognitionClient = AmazonRekognitionClientBuilder.defaultClient();

        Image image = new Image()
                .withS3Object(new S3Object()
                        .withBucket(bucket)
                        .withName(photo));

        IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
                .withImage(image)
                .withQualityFilter(QualityFilter.AUTO)
                .withMaxFaces(1)
                .withCollectionId(collectionId)
                .withExternalImageId(photo)
                .withDetectionAttributes("DEFAULT");

        IndexFacesResult indexFacesResult = rekognitionClient.indexFaces(indexFacesRequest);

        System.out.println("Results for " + photo);
        System.out.println("Faces indexed:");
        List<FaceRecord> faceRecords = indexFacesResult.getFaceRecords();
        for (FaceRecord faceRecord : faceRecords) {
            System.out.println(" Face ID: " + faceRecord.getFace().getFaceId());
            System.out.println(" Location:" + faceRecord.getFaceDetail().getBoundingBox().toString());
        }

        List<UnindexedFace> unindexedFaces = indexFacesResult.getUnindexedFaces();
        System.out.println("Faces not indexed:");
        for (UnindexedFace unindexedFace : unindexedFaces) {
            System.out.println(" Location:" + unindexedFace.getFaceDetail().getBoundingBox().toString());
            System.out.println(" Reasons:");
            for (String reason : unindexedFace.getReasons()) {
                System.out.println(" " + reason);
            }
        }
    }
}
- Java V2
-
This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.
//snippet-start:[rekognition.java2.add_faces_collection.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rekognition.RekognitionClient;
import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;
import software.amazon.awssdk.services.rekognition.model.IndexFacesRequest;
import software.amazon.awssdk.services.rekognition.model.Image;
import software.amazon.awssdk.services.rekognition.model.QualityFilter;
import software.amazon.awssdk.services.rekognition.model.Attribute;
import software.amazon.awssdk.services.rekognition.model.FaceRecord;
import software.amazon.awssdk.services.rekognition.model.UnindexedFace;
import software.amazon.awssdk.services.rekognition.model.RekognitionException;
import software.amazon.awssdk.services.rekognition.model.Reason;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.List;
//snippet-end:[rekognition.java2.add_faces_collection.import]

/**
 * Before running this Java V2 code example, set up your development environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class AddFacesToCollection {
    public static void main(String[] args) {
        final String usage = "\n" +
            "Usage: " +
            "   <collectionId> <sourceImage>\n\n" +
            "Where:\n" +
            "   collectionId - The name of the collection.\n" +
            "   sourceImage - The path to the image (for example, C:\\AWS\\pic1.png). \n\n";

        if (args.length != 2) {
            System.out.println(usage);
            System.exit(1);
        }

        String collectionId = args[0];
        String sourceImage = args[1];
        Region region = Region.US_EAST_1;
        RekognitionClient rekClient = RekognitionClient.builder()
            .region(region)
            .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
            .build();

        addToCollection(rekClient, collectionId, sourceImage);
        rekClient.close();
    }

    // snippet-start:[rekognition.java2.add_faces_collection.main]
    public static void addToCollection(RekognitionClient rekClient, String collectionId, String sourceImage) {
        try {
            InputStream sourceStream = new FileInputStream(sourceImage);
            SdkBytes sourceBytes = SdkBytes.fromInputStream(sourceStream);
            Image souImage = Image.builder()
                .bytes(sourceBytes)
                .build();

            IndexFacesRequest facesRequest = IndexFacesRequest.builder()
                .collectionId(collectionId)
                .image(souImage)
                .maxFaces(1)
                .qualityFilter(QualityFilter.AUTO)
                .detectionAttributes(Attribute.DEFAULT)
                .build();

            IndexFacesResponse facesResponse = rekClient.indexFaces(facesRequest);
            System.out.println("Results for the image");
            System.out.println("\n Faces indexed:");
            List<FaceRecord> faceRecords = facesResponse.faceRecords();
            for (FaceRecord faceRecord : faceRecords) {
                System.out.println(" Face ID: " + faceRecord.face().faceId());
                System.out.println(" Location:" + faceRecord.faceDetail().boundingBox().toString());
            }

            List<UnindexedFace> unindexedFaces = facesResponse.unindexedFaces();
            System.out.println("Faces not indexed:");
            for (UnindexedFace unindexedFace : unindexedFaces) {
                System.out.println(" Location:" + unindexedFace.faceDetail().boundingBox().toString());
                System.out.println(" Reasons:");
                for (Reason reason : unindexedFace.reasons()) {
                    System.out.println("Reason: " + reason);
                }
            }

        } catch (RekognitionException | FileNotFoundException e) {
            System.out.println(e.getMessage());
            System.exit(1);
        }
    }
    // snippet-end:[rekognition.java2.add_faces_collection.main]
}
- AWS CLI
-
This AWS CLI command displays the JSON output for the index-faces CLI operation.
Replace the value of collection-id with the name of the collection that you want the face to be stored in. Replace the values of Bucket and Name with the Amazon S3 bucket and image file that you used in step 2. The max-faces parameter restricts the number of indexed faces to 1. Remove or change its value to suit your needs. Replace the value of profile-name with the name of your developer profile.
aws rekognition index-faces --image '{"S3Object":{"Bucket":"bucket-name","Name":"file-name"}}' --collection-id "collection-id" \
--max-faces 1 --quality-filter "AUTO" --detection-attributes "ALL" \
--external-image-id "example-image.jpg" --profile profile-name
If you're accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with backslashes (that is, \) to address any parser errors that you might encounter. For an example, see the following:
aws rekognition index-faces --image "{\"S3Object\":{\"Bucket\":\"bucket-name\",\"Name\":\"image-name\"}}" \
--collection-id "collection-id" --max-faces 1 --quality-filter "AUTO" --detection-attributes "ALL" \
--external-image-id "example-image.jpg" --profile profile-name
- Python
-
This example displays the face identifiers for faces added to the collection.
Change the value of collection_id to the name of the collection that you want to add a face to. Replace the values of bucket and photo with the names of the Amazon S3 bucket and image that you used in step 2. The MaxFaces input parameter restricts the number of indexed faces to 1. Remove or change its value to suit your needs. Replace the value of profile_name in the line that creates the Rekognition session with the name of your developer profile.
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

import boto3

def add_faces_to_collection(bucket, photo, collection_id):
    session = boto3.Session(profile_name='profile-name')
    client = session.client('rekognition')

    response = client.index_faces(CollectionId=collection_id,
                                  Image={'S3Object': {'Bucket': bucket, 'Name': photo}},
                                  ExternalImageId=photo,
                                  MaxFaces=1,
                                  QualityFilter="AUTO",
                                  DetectionAttributes=['ALL'])

    print('Results for ' + photo)
    print('Faces indexed:')
    for faceRecord in response['FaceRecords']:
        print(' Face ID: ' + faceRecord['Face']['FaceId'])
        print(' Location: {}'.format(faceRecord['Face']['BoundingBox']))

    print('Faces not indexed:')
    for unindexedFace in response['UnindexedFaces']:
        print(' Location: {}'.format(unindexedFace['FaceDetail']['BoundingBox']))
        print(' Reasons:')
        for reason in unindexedFace['Reasons']:
            print(' ' + reason)

    return len(response['FaceRecords'])

def main():
    bucket = 'amzn-s3-demo-bucket'
    collection_id = 'collection-id'
    photo = 'photo-name'

    indexed_faces_count = add_faces_to_collection(bucket, photo, collection_id)
    print("Faces indexed count: " + str(indexed_faces_count))

if __name__ == "__main__":
    main()
- .NET
-
This example displays the face identifiers for faces added to the collection.
Change the value of collectionId to the name of the collection that you want to add a face to. Replace the values of bucket and photo with the names of the Amazon S3 bucket and image that you used in step 2.
//Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

using System;
using System.Collections.Generic;
using Amazon.Rekognition;
using Amazon.Rekognition.Model;

public class AddFaces
{
    public static void Example()
    {
        String collectionId = "MyCollection";
        String bucket = "amzn-s3-demo-bucket";
        String photo = "input.jpg";

        AmazonRekognitionClient rekognitionClient = new AmazonRekognitionClient();

        Image image = new Image()
        {
            S3Object = new S3Object()
            {
                Bucket = bucket,
                Name = photo
            }
        };

        IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
        {
            Image = image,
            CollectionId = collectionId,
            ExternalImageId = photo,
            DetectionAttributes = new List<String>(){ "ALL" }
        };

        IndexFacesResponse indexFacesResponse = rekognitionClient.IndexFaces(indexFacesRequest);
        Console.WriteLine(photo + " added");
        foreach (FaceRecord faceRecord in indexFacesResponse.FaceRecords)
            Console.WriteLine("Face detected: Faceid is " +
               faceRecord.Face.FaceId);
    }
}
IndexFaces operation request
The input to IndexFaces is the image to be indexed and the collection to add the face or faces to.
{
    "CollectionId": "MyCollection",
    "Image": {
        "S3Object": {
            "Bucket": "bucket",
            "Name": "input.jpg"
        }
    },
    "ExternalImageId": "input.jpg",
    "DetectionAttributes": [
        "DEFAULT"
    ],
    "MaxFaces": 1,
    "QualityFilter": "AUTO"
}
IndexFaces operation response
IndexFaces returns information about the faces that were detected in the image. For example, the following JSON response includes the default detection attributes for the faces detected in the input image. The example also shows a face that wasn't indexed because the value of the MaxFaces input parameter was exceeded; the Reasons array contains EXCEEDS_MAX_FACES. If a face isn't indexed for quality reasons, Reasons contains values such as LOW_SHARPNESS or LOW_BRIGHTNESS. For more information, see UnindexedFace.
{
    "FaceModelVersion": "3.0",
    "FaceRecords": [
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.3247932195663452,
                    "Left": 0.5055555701255798,
                    "Top": 0.2743072211742401,
                    "Width": 0.21444444358348846
                },
                "Confidence": 99.99998474121094,
                "ExternalImageId": "input.jpg",
                "FaceId": "b86e2392-9da1-459b-af68-49118dc16f87",
                "ImageId": "09f43d92-02b6-5cea-8fbd-9f187db2050d"
            },
            "FaceDetail": {
                "BoundingBox": {
                    "Height": 0.3247932195663452,
                    "Left": 0.5055555701255798,
                    "Top": 0.2743072211742401,
                    "Width": 0.21444444358348846
                },
                "Confidence": 99.99998474121094,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.5751981735229492,
                        "Y": 0.4010535478591919
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.6511467099189758,
                        "Y": 0.4017036259174347
                    },
                    {
                        "Type": "nose",
                        "X": 0.6314528584480286,
                        "Y": 0.4710812568664551
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.5879443287849426,
                        "Y": 0.5171778798103333
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.6444502472877502,
                        "Y": 0.5164633989334106
                    }
                ],
                "Pose": {
                    "Pitch": -10.313642501831055,
                    "Roll": -1.0316886901855469,
                    "Yaw": 18.079818725585938
                },
                "Quality": {
                    "Brightness": 71.2919921875,
                    "Sharpness": 78.74752044677734
                }
            }
        }
    ],
    "OrientationCorrection": "",
    "UnindexedFaces": [
        {
            "FaceDetail": {
                "BoundingBox": {
                    "Height": 0.1329464465379715,
                    "Left": 0.5611110925674438,
                    "Top": 0.6832437515258789,
                    "Width": 0.08777777850627899
                },
                "Confidence": 92.37225341796875,
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.5796897411346436,
                        "Y": 0.7452847957611084
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.6078574657440186,
                        "Y": 0.742687463760376
                    },
                    {
                        "Type": "nose",
                        "X": 0.597953200340271,
                        "Y": 0.7620673179626465
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.5884202122688293,
                        "Y": 0.7920381426811218
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.60627681016922,
                        "Y": 0.7919750809669495
                    }
                ],
                "Pose": {
                    "Pitch": 15.658954620361328,
                    "Roll": -4.583454608917236,
                    "Yaw": 10.558992385864258
                },
                "Quality": {
                    "Brightness": 42.54612350463867,
                    "Sharpness": 86.93206024169922
                }
            },
            "Reasons": [
                "EXCEEDS_MAX_FACES"
            ]
        }
    ]
}
To get all facial information, specify "ALL" for the DetectionAttributes request parameter. For example, in the following example response, note the additional information in the faceDetail element, which isn't persisted on the server:
The face element provides the metadata that's persisted on the server.
FaceModelVersion is the version of the face model that's associated with the collection. For more information, see Understanding model versioning.
OrientationCorrection is the estimated orientation of the image. Orientation correction information isn't returned if you're using a version of the face detection model that's later than version 3. For more information, see Getting image orientation and bounding box coordinates.
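If you need to know which face model version a collection uses (for example, to predict whether OrientationCorrection will be present) before you index, you can call the DescribeCollection operation. A minimal Python sketch, where the collection name is a placeholder:
import boto3

# Sketch: look up the face model version associated with an existing collection.
client = boto3.client('rekognition')
description = client.describe_collection(CollectionId='MyCollection')
print('Face model version: ' + description['FaceModelVersion'])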
The following example response shows the JSON returned when ["ALL"] is specified:
{
    "FaceModelVersion": "3.0",
    "FaceRecords": [
        {
            "Face": {
                "BoundingBox": {
                    "Height": 0.06333333253860474,
                    "Left": 0.17185185849666595,
                    "Top": 0.7366666793823242,
                    "Width": 0.11061728745698929
                },
                "Confidence": 99.99999237060547,
                "ExternalImageId": "input.jpg",
                "FaceId": "578e2e1b-d0b0-493c-aa39-ba476a421a34",
                "ImageId": "9ba38e68-35b6-5509-9d2e-fcffa75d1653"
            },
            "FaceDetail": {
                "AgeRange": {
                    "High": 25,
                    "Low": 15
                },
                "Beard": {
                    "Confidence": 99.98077392578125,
                    "Value": false
                },
                "BoundingBox": {
                    "Height": 0.06333333253860474,
                    "Left": 0.17185185849666595,
                    "Top": 0.7366666793823242,
                    "Width": 0.11061728745698929
                },
                "Confidence": 99.99999237060547,
                "Emotions": [
                    {
                        "Confidence": 95.40877532958984,
                        "Type": "HAPPY"
                    },
                    {
                        "Confidence": 6.6088080406188965,
                        "Type": "CALM"
                    },
                    {
                        "Confidence": 0.7385611534118652,
                        "Type": "SAD"
                    }
                ],
                "EyeDirection": {
                    "Yaw": 16.299732,
                    "Pitch": -6.407457,
                    "Confidence": 99.968704
                },
                "Eyeglasses": {
                    "Confidence": 99.96795654296875,
                    "Value": false
                },
                "EyesOpen": {
                    "Confidence": 64.0671157836914,
                    "Value": true
                },
                "Gender": {
                    "Confidence": 100,
                    "Value": "Female"
                },
                "Landmarks": [
                    {
                        "Type": "eyeLeft",
                        "X": 0.21361233294010162,
                        "Y": 0.757106363773346
                    },
                    {
                        "Type": "eyeRight",
                        "X": 0.2518567442893982,
                        "Y": 0.7599404454231262
                    },
                    {
                        "Type": "nose",
                        "X": 0.2262365221977234,
                        "Y": 0.7711842060089111
                    },
                    {
                        "Type": "mouthLeft",
                        "X": 0.2050037682056427,
                        "Y": 0.7801263332366943
                    },
                    {
                        "Type": "mouthRight",
                        "X": 0.2430567592382431,
                        "Y": 0.7836716771125793
                    },
                    {
                        "Type": "leftPupil",
                        "X": 0.2161938101053238,
                        "Y": 0.756662905216217
                    },
                    {
                        "Type": "rightPupil",
                        "X": 0.2523181438446045,
                        "Y": 0.7603650689125061
                    },
                    {
                        "Type": "leftEyeBrowLeft",
                        "X": 0.20066319406032562,
                        "Y": 0.7501518130302429
                    },
                    {
                        "Type": "leftEyeBrowUp",
                        "X": 0.2130996286869049,
                        "Y": 0.7480520606040955
                    },
                    {
                        "Type": "leftEyeBrowRight",
                        "X": 0.22584207355976105,
                        "Y": 0.7504606246948242
                    },
                    {
                        "Type": "rightEyeBrowLeft",
                        "X": 0.24509544670581818,
                        "Y": 0.7526801824569702
                    },
                    {
                        "Type": "rightEyeBrowUp",
                        "X": 0.2582615911960602,
                        "Y": 0.7516844868659973
                    },
                    {
                        "Type": "rightEyeBrowRight",
                        "X": 0.26881539821624756,
                        "Y": 0.7554477453231812
                    },
                    {
                        "Type": "leftEyeLeft",
                        "X": 0.20624476671218872,
                        "Y": 0.7568746209144592
                    },
                    {
                        "Type": "leftEyeRight",
                        "X": 0.22105035185813904,
                        "Y": 0.7582521438598633
                    },
                    {
                        "Type": "leftEyeUp",
                        "X": 0.21401576697826385,
                        "Y": 0.7553104162216187
                    },
                    {
                        "Type": "leftEyeDown",
                        "X": 0.21317370235919952,
                        "Y": 0.7584449648857117
                    },
                    {
                        "Type": "rightEyeLeft",
                        "X": 0.24393919110298157,
                        "Y": 0.7600628137588501
                    },
                    {
                        "Type": "rightEyeRight",
                        "X": 0.2598416209220886,
                        "Y": 0.7605880498886108
                    },
                    {
                        "Type": "rightEyeUp",
                        "X": 0.2519053518772125,
                        "Y": 0.7582084536552429
                    },
                    {
                        "Type": "rightEyeDown",
                        "X": 0.25177454948425293,
                        "Y": 0.7612871527671814
                    },
                    {
                        "Type": "noseLeft",
                        "X": 0.2185886949300766,
                        "Y": 0.774715781211853
                    },
                    {
                        "Type": "noseRight",
                        "X": 0.23328955471515656,
                        "Y": 0.7759330868721008
                    },
                    {
                        "Type": "mouthUp",
                        "X": 0.22446128726005554,
                        "Y": 0.7805567383766174
                    },
                    {
                        "Type": "mouthDown",
                        "X": 0.22087252140045166,
                        "Y": 0.7891407608985901
                    }
                ],
                "MouthOpen": {
                    "Confidence": 95.87068939208984,
                    "Value": false
                },
                "Mustache": {
                    "Confidence": 99.9828109741211,
                    "Value": false
                },
                "Pose": {
                    "Pitch": -0.9409101605415344,
                    "Roll": 7.233824253082275,
                    "Yaw": -2.3602254390716553
                },
                "Quality": {
                    "Brightness": 32.01998519897461,
                    "Sharpness": 93.67259216308594
                },
                "Smile": {
                    "Confidence": 86.7142105102539,
                    "Value": true
                },
                "Sunglasses": {
                    "Confidence": 97.38925170898438,
                    "Value": false
                }
            }
        }
    ],
    "OrientationCorrection": "ROTATE_0",
    "UnindexedFaces": []
}