

# Best practices for sensors, input images, and videos
<a name="best-practices"></a>

This section contains best practice information for using Amazon Rekognition. These best practices will help you get optimal performance out of the operations you invoke. If you are struggling to get the expected results from an operation, ensure you are following the best practices documented here.

For information regarding the latency of image operations, see the following: 
+ [Amazon Rekognition Image operation latency](operation-latency.md)

Facial comparison and face search operations require you to follow specific best practices to find faces in an image. These requirements are documented at the following links:
+ [Recommendations for facial comparison input images](recommendations-facial-input-images.md)
+ [Recommendations for searching faces in a collection](recommendations-facial-input-images-search.md)

The following sections cover how to set up your camera for each type of media Amazon Rekognition is capable of analyzing:
+ [Recommendations for camera setup (image and video)](recommendations-camera-image-video.md)
+ [Recommendations for camera setup (stored and streaming video)](recommendations-camera-stored-streaming-video.md)
+ [Recommendations for camera setup (streaming video)](recommendations-camera-streaming-video.md)

The Face Liveness operations also have their own best practices that you should follow to get the best performance from the Liveness check:
+ [Recommendations for Usage of Face Liveness](recommendations-liveness.md)

# Amazon Rekognition Image operation latency
<a name="operation-latency"></a>

To ensure the lowest possible latency for Amazon Rekognition Image operations, consider the following:
+ The Region for the Amazon S3 bucket that contains your images must match the Region you use for Amazon Rekognition Image API operations. 
+ Calling an Amazon Rekognition Image operation with image bytes is faster than uploading the image to an Amazon S3 bucket and then referencing the uploaded image in an Amazon Rekognition Image operation. Consider this approach if you are uploading images to Amazon Rekognition Image for near real-time processing. For example, images uploaded from an IP camera or images uploaded through a web portal.
+ If the image is already in an Amazon S3 bucket, referencing it in an Amazon Rekognition Image operation is probably faster than passing image bytes to the operation.
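The two invocation styles above can be sketched with the AWS SDK for Python (Boto3). The bucket and object names are hypothetical placeholders; the helper functions only build the `Image` parameter that `DetectFaces` accepts in either form.

```python
def image_from_bytes(path):
    # Build the Image parameter from raw bytes; fastest for near real-time
    # uploads (for example, frames from an IP camera or a web portal).
    with open(path, "rb") as f:
        return {"Bytes": f.read()}

def image_from_s3(bucket, key):
    # Build the Image parameter referencing an object already stored in S3;
    # avoids re-uploading bytes the service can read directly.
    return {"S3Object": {"Bucket": bucket, "Name": key}}

def detect_faces_in_s3_object(bucket, key, region="us-east-1"):
    # Requires AWS credentials; the S3 bucket Region must match `region`.
    import boto3
    client = boto3.client("rekognition", region_name=region)
    return client.detect_faces(Image=image_from_s3(bucket, key),
                               Attributes=["DEFAULT"])
```

For example, `detect_faces_in_s3_object("my-images-bucket", "portrait.jpg")` returns the `FaceDetails` response for an image already in the bucket, without transferring the image bytes through your application.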

# Recommendations for facial comparison input images
<a name="recommendations-facial-input-images"></a>

The models used for face comparison operations are designed to work for a wide variety of poses, facial expressions, age ranges, rotations, lighting conditions, and sizes. We recommend that you use the following guidelines when choosing reference photos for [CompareFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_CompareFaces.html) or for adding faces to a collection using [IndexFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_IndexFaces.html).

## General recommendations for input images for face operations
<a name="recommendations-facial-input-images-general"></a>
+ Use images that are bright and sharp. Avoid using images that may be blurry due to subject and camera motion as much as possible. [DetectFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectFaces.html) can be used to determine the brightness and sharpness of a face.
+ For the purposes of gaze detection, it's recommended that you upload the original image at original size and quality.
+ Use an image with a face that is within the recommended range of angles. The pitch should be less than 30 degrees face down and less than 45 degrees face up. The yaw should be less than 45 degrees in either direction. There is no restriction on the roll.
+ Use an image of a face with both eyes open and visible.
+ Use an image of a face that is not obscured or tightly cropped. The image should contain the full head and shoulders of the person. It should not be cropped to the face bounding box.
+ Avoid items that block the face, such as headbands and masks.
+ Use an image of a face that occupies a large proportion of the image. Images where the face occupies a larger portion of the image are matched with greater accuracy. 
+ Ensure that images are sufficiently large in terms of resolution. Amazon Rekognition can recognize faces as small as 50 x 50 pixels in image resolutions up to 1920 x 1080. Higher-resolution images require a larger minimum face size. Faces larger than the minimum size provide a more accurate set of facial comparison results.
+ Use color images. 
+ Use images with flat lighting on the face, as opposed to varied lighting such as shadows. 
+ Use images that have sufficient contrast with the background. A high-contrast monochrome background works well.
+ Use images of faces with neutral facial expressions with mouth closed and little to no smile for applications that require high precision.
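The pose and quality guidance above can be checked programmatically against a `FaceDetail` element returned by `DetectFaces`. This is a sketch only: the brightness and sharpness thresholds are illustrative placeholders (Amazon Rekognition does not publish specific cutoffs), and the sign convention assumed here is that a negative `Pitch` means the face is angled down.

```python
def meets_recommendations(face_detail, min_brightness=40.0, min_sharpness=40.0):
    # Evaluate one DetectFaces FaceDetail against the recommended ranges:
    # pitch less than 30 degrees down / 45 degrees up, yaw less than
    # 45 degrees either direction, both eyes open; roll is unrestricted.
    pose = face_detail["Pose"]
    quality = face_detail["Quality"]
    pitch_ok = -30.0 <= pose["Pitch"] <= 45.0  # assumes negative = face down
    yaw_ok = abs(pose["Yaw"]) <= 45.0
    eyes_ok = face_detail.get("EyesOpen", {}).get("Value", True)
    return (pitch_ok and yaw_ok and eyes_ok
            and quality["Brightness"] >= min_brightness
            and quality["Sharpness"] >= min_sharpness)
```

You could run this over each element of `response["FaceDetails"]` to filter out images that are unlikely to compare or index well.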

# Recommendations for searching faces in a collection
<a name="recommendations-facial-input-images-search"></a>
+ When searching for faces in a collection, ensure that recent face images are indexed. 
+ When creating a collection using `IndexFaces`, use multiple face images of an individual with different pitches and yaws (within the recommended range of angles). We recommend that at least five images of the person are indexed—straight on, face turned left with a yaw of 45 degrees or less, face turned right with a yaw of 45 degrees or less, face tilted down with a pitch of 30 degrees or less, and face tilted up with a pitch of 45 degrees or less. If you want to track that these face instances belong to the same individual, consider using the external image ID attribute if there is only one face in the image being indexed. For example, five images of John Doe can be tracked in the collection with external image IDs as `John_Doe_1.jpg, … John_Doe_5.jpg`.
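The external-image-ID pattern above can be sketched as follows with Boto3. The collection, bucket, and person names are hypothetical; `index_person` assumes each S3 object contains exactly one face, which is why it sets `MaxFaces=1`.

```python
def external_image_ids(person, count=5):
    # Generate per-image IDs like John_Doe_1.jpg ... John_Doe_5.jpg so that
    # all indexed face instances can be traced back to the same individual.
    return [f"{person}_{i}.jpg" for i in range(1, count + 1)]

def index_person(collection_id, bucket, person, count=5, region="us-east-1"):
    # Requires AWS credentials and an existing collection.
    import boto3
    client = boto3.client("rekognition", region_name=region)
    for ext_id in external_image_ids(person, count):
        client.index_faces(
            CollectionId=collection_id,
            Image={"S3Object": {"Bucket": bucket, "Name": ext_id}},
            ExternalImageId=ext_id,   # only safe with one face per image
            MaxFaces=1,
            QualityFilter="AUTO",
        )
```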

# Recommendations for camera setup (image and video)
<a name="recommendations-camera-image-video"></a>

The following recommendations are in addition to [Recommendations for facial comparison input images](recommendations-facial-input-images.md). 

![\[Diagram showing the three axes of head rotation: pitch, roll, and yaw, with arrows indicating the direction of each axis around a gray human head icon.\]](http://docs.aws.amazon.com/rekognition/latest/dg/images/RPY-diagram.png)

+ Image Resolution – There is no minimum requirement for overall image resolution, as long as the face resolution is at least 50 x 50 pixels for images with a total resolution up to 1920 x 1080. Higher-resolution images require a larger minimum face size.
**Note**  
The preceding recommendation is based on the native resolution of the camera. Generating a high-resolution image from a low-resolution image does not produce the results needed for face search (due to artifacts generated by the up-sampling of the image). 
+ Camera Angle – There are three measurements for camera angle—pitch, roll, and yaw.
  + Pitch – We recommend a pitch of less than 30 degrees when the camera is facing down and less than 45 degrees when the camera is facing up.
  + Roll – There isn’t a minimum requirement for this parameter. Amazon Rekognition can handle any amount of roll.
  + Yaw – We recommend a yaw of less than 45 degrees in either direction. 

  The face angle along any axis that is captured by the camera is a combination of both the camera angle facing the scene and the angle at which the subject’s head is in the scene. For example, if the camera is 30 degrees facing down and the person has their head down a further 30 degrees, the actual face pitch as seen by the camera is 60 degrees. In this case, Amazon Rekognition would not be able to recognize the face. We recommend setting up cameras such that the camera angles are based on the assumption that people are generally looking into the camera with the overall pitch (combination of face and camera) at 30 degrees or less.
+ Camera Zoom – The recommended minimum face resolution of 50 x 50 pixels should drive this camera setting. We recommend using the zoom setting of a camera so that the desired faces are at a resolution no less than 50 x 50 pixels.
+ Camera Height – The recommended camera pitch should drive this parameter. 
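The combined-angle reasoning above is simple addition, sketched here alongside a check for the 50 x 50 pixel minimum face resolution. Both helpers are illustrative only; the function names are not part of any Amazon Rekognition API.

```python
def effective_pitch(camera_pitch_deg, subject_pitch_deg):
    # Combined downward pitch seen by the camera, in degrees (downward
    # positive). A 30-degree camera plus a 30-degree head-down subject
    # yields 60 degrees, which exceeds the recommended limit.
    return camera_pitch_deg + subject_pitch_deg

def face_large_enough(face_width_px, face_height_px, min_px=50):
    # Recommended minimum face resolution of 50 x 50 pixels.
    return face_width_px >= min_px and face_height_px >= min_px
```

For example, `effective_pitch(30, 30)` returns 60, above the recommended overall pitch of 30 degrees or less, so that setup should be avoided.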

# Recommendations for camera setup (stored and streaming video)
<a name="recommendations-camera-stored-streaming-video"></a>

The following recommendations are in addition to [Recommendations for camera setup (image and video)](recommendations-camera-image-video.md).
+ The video should be encoded with the H.264 codec.
+ The recommended frame rate is 30 fps. (It should not be less than 5 fps.)
+ The recommended encoder bitrate is 3 Mbps. (It should not be less than 1.5 Mbps.)
+ Frame Rate vs. Frame Resolution – If the encoder bitrate is a constraint, we recommend favoring a higher frame resolution over a higher frame rate for better face search results. This ensures that Amazon Rekognition gets the best quality frame within the allocated bitrate. However, there is a downside to this. Because of the low frame rate, the camera misses fast motion in a scene. It's important to understand the trade-offs between these two parameters for a given setup. For example, if the maximum possible bitrate is 1.5 Mbps, a camera can capture 1080p at 5 fps or 720p at 15 fps. The choice between the two is application dependent, as long as the recommended face resolution of 50 x 50 pixels is met.
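One rough way to reason about the trade-off above is the per-pixel bit budget: at a fixed bitrate, 1080p at 5 fps leaves more bits per pixel per frame than 720p at 15 fps, which is consistent with favoring resolution for face search. This metric is a simplification (real encoder quality also depends on motion and codec settings), offered only as a back-of-the-envelope sketch.

```python
def bits_per_pixel(bitrate_bps, width, height, fps):
    # Average bits available per pixel per frame over one second of video.
    return bitrate_bps / (width * height * fps)

# At 1.5 Mbps: 1080p @ 5 fps ~= 0.14 bits/pixel, 720p @ 15 fps ~= 0.11 bits/pixel.
```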

# Recommendations for camera setup (streaming video)
<a name="recommendations-camera-streaming-video"></a>



The following recommendation is in addition to [Recommendations for camera setup (stored and streaming video)](recommendations-camera-stored-streaming-video.md).

An additional constraint with streaming applications is internet bandwidth. For live video, Amazon Rekognition only accepts Amazon Kinesis Video Streams as an input. You should understand the dependency between the encoder bitrate and the available network bandwidth. Available bandwidth should, at a minimum, support the same bitrate that the camera is using to encode the live stream. This ensures that whatever the camera captures is relayed through Amazon Kinesis Video Streams. If the available bandwidth is less than the encoder bitrate, Amazon Kinesis Video Streams drops bits based on the network bandwidth. This results in low video quality. 

A typical streaming setup involves connecting multiple cameras to a network hub that relays the streams. In this case, the bandwidth should accommodate the cumulative sum of the streams coming from all cameras connected to the hub. For example, if the hub is connected to five cameras encoding at 1.5 Mbps, the available network bandwidth should be at least 7.5 Mbps. To ensure that there are no dropped packets, you should consider keeping the network bandwidth higher than 7.5 Mbps to accommodate for jitters due to dropped connections between a camera and the hub. The actual value depends on the reliability of the internal network.
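The sizing arithmetic above can be captured in a small helper. The 20 percent default headroom is an illustrative assumption, not a published figure; as the text notes, the right margin depends on the reliability of your internal network.

```python
def required_bandwidth_mbps(camera_bitrates_mbps, headroom_fraction=0.2):
    # Sum of all camera encoder bitrates, plus a headroom margin to absorb
    # jitter from dropped connections between cameras and the hub.
    return sum(camera_bitrates_mbps) * (1 + headroom_fraction)
```

For example, five cameras encoding at 1.5 Mbps each sum to 7.5 Mbps; with 20 percent headroom the hub uplink should provide at least 9 Mbps.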

# Recommendations for Usage of Face Liveness
<a name="recommendations-liveness"></a>

We recommend the following best practices when using Rekognition Face Liveness:
+ Users should complete the Face Liveness check in environments that aren’t too dark or too bright and have fairly uniform lighting. 
+ Users should increase their display screen's brightness to its maximum level when performing checks in a web browser. The mobile native SDKs adjust the display brightness automatically. 
+ Choose a confidence score threshold that reflects the nature of your use case. For use cases with greater security concerns, use a high threshold. 
+ Regularly run human review checks on audit images to make sure that spoof attacks are mitigated at the confidence threshold you set. 
+ Offer an alternative face liveness verification path to your users if they are photo-sensitive or do not want to verify their face liveness using Rekognition. 
+ Do not send or display the liveness check score on the user application. Only send a pass or fail signal.
+ Allow only five failed liveness checks in three minutes from a single device. After five failures, time out the user for 30–60 minutes. If this pattern repeats 3–5 times, block the device from making additional calls.
+ Implement the get-ready screen in your workflow so that users can more easily pass the Face Liveness checks.
+ You are responsible for providing legally adequate privacy notices to, and obtaining any necessary consent from, your End Users for the processing, storage, use, and transfer of content by Face Liveness.
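The per-device throttling recommended above (five failures in three minutes, then a lockout) can be sketched as in-memory logic. This is a minimal illustration, not a production design: the class name and storage are hypothetical, durations mirror the guidance, and a real deployment would persist state in a shared store and add the repeated-pattern device block.

```python
import time

class LivenessThrottle:
    # At most 5 failed checks per device in a 3-minute window,
    # then a 30-minute lockout.
    MAX_FAILS = 5
    WINDOW_S = 3 * 60
    LOCKOUT_S = 30 * 60

    def __init__(self):
        self._fails = {}    # device_id -> timestamps of recent failures
        self._locked = {}   # device_id -> lockout expiry timestamp

    def record_failure(self, device_id, now=None):
        now = time.time() if now is None else now
        # Keep only failures inside the sliding window, then add this one.
        recent = [t for t in self._fails.get(device_id, [])
                  if now - t < self.WINDOW_S]
        recent.append(now)
        self._fails[device_id] = recent
        if len(recent) >= self.MAX_FAILS:
            self._locked[device_id] = now + self.LOCKOUT_S

    def is_allowed(self, device_id, now=None):
        now = time.time() if now is None else now
        return now >= self._locked.get(device_id, 0.0)
```

The `now` parameter exists so the logic can be tested deterministically; in production you would omit it and let the wall clock apply.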