

# Using the AWS SDK for Rust
<a name="using"></a>

Learn common and recommended ways of using the AWS SDK for Rust to work with AWS services.

**Topics**
+ [Making service requests](make-request.md)
+ [Best practices](best-practices.md)
+ [Concurrency](concurrency.md)
+ [Creating Lambda functions](lambda.md)
+ [Creating presigned URLs](presigned-urls.md)
+ [Handling errors](error-handling.md)
+ [Pagination](paginating.md)
+ [Unit testing](testing.md)
+ [Waiters](waiters.md)

# Making AWS service requests using the AWS SDK for Rust
<a name="make-request"></a>

 To programmatically access AWS services, the AWS SDK for Rust uses a client struct for each AWS service. For example, if your application needs to access Amazon EC2, your application creates an Amazon EC2 client struct to interface with that service. You then use the service client to make requests to that AWS service. 

To make a request to an AWS service, you must first create and [configure](configure.md) a service client. Each AWS service your code uses has its own crate and its own dedicated client type for interacting with that service.

To interact with AWS services in the AWS SDK for Rust, create a service-specific client, use its API methods with fluent builder-style chaining, and call `send()` to execute the request.

The `Client` exposes one method for each API operation exposed by the service. The return value of each of these methods is a "fluent builder", where different inputs for that API are added by builder-style function call chaining. After setting the inputs, call `send()` to get a [`Future`](https://doc.rust-lang.org/nightly/core/future/trait.Future.html) that will resolve to either a successful output or an `SdkError`. For more information on `SdkError`, see [Handling errors in the AWS SDK for Rust](error-handling.md). 

The following example demonstrates a basic operation using Amazon S3 to create a bucket in the `us-west-2` AWS Region: 

```
use aws_config::BehaviorVersion;
use aws_sdk_s3::types::CreateBucketConfiguration;

let config = aws_config::defaults(BehaviorVersion::latest())
    .load()
    .await;
  
let s3 = aws_sdk_s3::Client::new(&config);
  
let result = s3.create_bucket()
    // Set some of the inputs for the operation.
    .bucket("my-bucket")
    .create_bucket_configuration(
        CreateBucketConfiguration::builder()
            .location_constraint(aws_sdk_s3::types::BucketLocationConstraint::UsWest2)
            .build()
        )
    // send() returns a Future that does nothing until awaited.
    .send()
    .await;
```

Each service crate has additional modules used for API inputs, such as the following: 
+ The `types` module has structs or enums to provide more complex structured information.
+  The `primitives` module has simpler types for representing data such as date times or binary blobs.

 See the [API reference documentation](https://awslabs.github.io/aws-sdk-rust/) for the service crate for more detailed crate organization and information. For example, the `aws-sdk-s3` crate for Amazon Simple Storage Service has several [modules](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/#modules). Two of these are:
+ [`aws_sdk_s3::types`](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/types/index.html)
+ [`aws_sdk_s3::primitives`](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/primitives/index.html)

# Best practices for using the AWS SDK for Rust
<a name="best-practices"></a>

The following are best practices for using the AWS SDK for Rust. 

## Reuse SDK clients when possible
<a name="bp-reuseClient"></a>

Depending on how an SDK client is constructed, creating a new client may result in each client maintaining its own HTTP connection pools, identity caches, and so on. We recommend sharing a client or at least sharing `SdkConfig` to avoid the overhead of expensive resource creation. All SDK clients implement `Clone` as a single atomic reference count update. 
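As a sketch of this practice, the following builds one client at startup and clones the handle into each task instead of constructing a new client per task (the bucket-listing call is only illustrative):

```
use aws_sdk_s3::Client;

async fn run() {
    let cfg = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
    // Build the client once...
    let client = Client::new(&cfg);

    let handles: Vec<_> = (0..10)
        .map(|_| {
            // ...and clone the handle for each task. Cloning is a single
            // atomic reference count update, so the HTTP connection pool
            // and identity cache are shared across all tasks.
            let client = client.clone();
            tokio::spawn(async move { client.list_buckets().send().await })
        })
        .collect();

    for handle in handles {
        let _ = handle.await;
    }
}
```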

## Configure API timeouts
<a name="bp-apiTimeouts"></a>

 The SDK provides default values for some timeout options, such as connection timeouts and socket timeouts, but not for API call timeouts or individual API call attempts. It is a good practice to set timeouts for both the individual attempts and the entire request. This ensures that your application fails fast when transient issues cause request attempts to run long or when fatal network issues occur. 

For more information on configuring operation timeouts, see [Configuring timeouts in the AWS SDK for Rust](timeouts.md). 
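As a sketch, both timeouts can be set through a `TimeoutConfig` when loading configuration (the durations here are illustrative and should be tuned for your workload):

```
use std::time::Duration;
use aws_config::timeout::TimeoutConfig;
use aws_config::BehaviorVersion;

let timeout_config = TimeoutConfig::builder()
    // Limit for a single attempt (one HTTP request).
    .operation_attempt_timeout(Duration::from_secs(2))
    // Limit for the entire operation, including all retry attempts.
    .operation_timeout(Duration::from_secs(10))
    .build();

let config = aws_config::defaults(BehaviorVersion::latest())
    .timeout_config(timeout_config)
    .load()
    .await;
```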

# Concurrency in the AWS SDK for Rust
<a name="concurrency"></a>

The AWS SDK for Rust doesn't provide concurrency control, but users have many options for implementing their own.

## Terms
<a name="conc-terms"></a>

Terms related to this subject are easy to confuse and some terms have become synonyms even though they originally represented separate concepts. In this guide, we'll define the following:
+  **Task**: Some "unit of work" that your program will run to completion, or attempt to run to completion. 
+  **Sequential Computing**: When several tasks are executed one after another. 
+  **Concurrent Computing**: When several tasks are executed in overlapping time periods.
+  **Concurrency**: The ability of a computer to complete multiple tasks in an arbitrary order. 
+  **Multitasking**: The ability of a computer to run several tasks concurrently. 
+  **Race Condition**: When the behavior of your program changes based on when a task is started or how long it takes to process a task. 
+  **Contention**: Conflict over access to a shared resource. When two or more tasks want to access a resource at the same time, that resource is "in contention". 
+  **Deadlock**: A state in which no more progress can be made. This typically happens because two tasks want to acquire each other's resources but neither task will release their resource until the other's resource becomes available. Deadlocks lead to a program becoming partly or completely unresponsive. 

## A simple example
<a name="conc-simple"></a>

Our first example is a sequential program. In later examples, we'll change this code using concurrency techniques. Later examples reuse the same `build_client_and_list_objects_to_download()` method and make changes within `main()`. Run the following commands to add dependencies to your project:
+ `cargo add aws-sdk-s3`
+ `cargo add aws-config tokio --features tokio/full`

The following example task is to download all the files in an Amazon Simple Storage Service bucket:

1.  Start by listing all the files. Save the keys in a list. 

1.  Iterate over the list, downloading each file in turn. 

```
use aws_sdk_s3::{Client, Error};
const EXAMPLE_BUCKET: &str = "amzn-s3-demo-bucket";  // Update to name of bucket you own.

// This initialization function won't be reproduced in
// examples following this one, in order to save space.
async fn build_client_and_list_objects_to_download() -> (Client, Vec<String>) {
    let cfg = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
    let client = Client::new(&cfg);
    let objects_to_download: Vec<_> = client
        .list_objects_v2()
        .bucket(EXAMPLE_BUCKET)
        .send()
        .await
        .expect("listing objects succeeds")
        .contents()
        .into_iter()
        .flat_map(aws_sdk_s3::types::Object::key)
        .map(ToString::to_string)
        .collect();
         
    (client, objects_to_download)
}
```

```
#[tokio::main]
async fn main() {
    let (client, objects_to_download) =
        build_client_and_list_objects_to_download().await;
    
    for object in objects_to_download {
        let res = client
            .get_object()
            .key(&object)
            .bucket(EXAMPLE_BUCKET)
            .send()
            .await
            .expect("get_object succeeds");
        let body = res.body.collect().await.expect("reading body succeeds").into_bytes();
        std::fs::write(object, body).expect("write succeeds");
    }
}
```

**Note**  
 In these examples, we won't be handling errors, and we assume that the example bucket has no objects with keys that look like file paths. Thus, we won't cover creating nested directories.

Because of the architecture of modern computers, we can rewrite this program to be much more efficient. We'll do that in a later example, but first, let's learn a few more concepts.

## Ownership and mutability
<a name="conc-ownership"></a>

Each value in Rust has a single owner. When an owner goes out of scope, all values it owns will also be dropped. The owner can provide either one or more immutable references to a value **or** a single mutable reference. The Rust compiler is responsible for ensuring that no reference outlives its owner.

Additional planning and design is needed when multiple tasks need to mutably access the same resource. In sequential computing, each task can mutably access the same resource without contention because they run one after another in a sequence. However, in concurrent computing, tasks can run in any order, and at the same time. Therefore, we must do more to prove to the compiler that multiple mutable references are impossible (or at least to crash if they do occur).

The Rust standard library provides many tools to help us accomplish this. For more information on these topics, see [Variables and Mutability](https://doc.rust-lang.org/book/ch03-01-variables-and-mutability.html) and [Understanding Ownership](https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html) in The Rust Programming Language book.

## More terms
<a name="conc-moreTerms"></a>

The following are lists of "synchronization objects". Altogether, they are the tools necessary to convince the compiler that our concurrent program won't break ownership rules. 

 From the Rust standard library module [`std::sync`](https://doc.rust-lang.org/std/sync/index.html): 
+ [`Arc`](https://doc.rust-lang.org/std/sync/struct.Arc.html): An ***A**tomically **R**eference-**C**ounted* pointer. When data is wrapped in an `Arc`, it can be shared freely, without worrying about any specific owner dropping the value early. In this sense, the ownership of the value becomes "shared". Values within an `Arc` cannot be mutable, but might have [interior mutability](https://doc.rust-lang.org/reference/interior-mutability.html). 
+ [`Barrier`](https://doc.rust-lang.org/std/sync/struct.Barrier.html): Ensures multiple threads will wait for each other to reach a point in the program before continuing execution all together. 
+ [`Condvar`](https://doc.rust-lang.org/std/sync/struct.Condvar.html): A ***Cond**ition **Var**iable* that provides the ability to block a thread while waiting for an event to occur. 
+ [`Mutex`](https://doc.rust-lang.org/std/sync/struct.Mutex.html): A ***Mut**ual **Ex**clusion* mechanism that ensures that at most one thread at a time is able to access some data. Generally speaking, a `Mutex` lock should never be held across an `.await` point in the code. 

 From the [`tokio::sync`](https://docs.rs/tokio/latest/tokio/sync/index.html) module: 

While the AWS SDKs are intended to be `async`-runtime-agnostic, we recommend the use of `tokio` synchronization objects for specific cases.
+ [`tokio::sync::Mutex`](https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html): Similar to the standard library's `Mutex`, but with a slightly higher cost. Unlike the standard `Mutex`, this one can be held across an `.await` point in the code.
+ [`tokio::sync::Semaphore`](https://docs.rs/tokio/latest/tokio/sync/struct.Semaphore.html): A variable used to control access to a common resource by multiple tasks.

## Rewriting our example to be more efficient (single-threaded concurrency)
<a name="conc_singleThread"></a>

In the following modified example, we use [`futures_util::future::join_all`](https://docs.rs/futures-util/latest/futures_util/future/fn.join_all.html) to run **ALL** `get_object` requests concurrently. Run the following command to add a new dependency to your project:
+ `cargo add futures-util`

```
#[tokio::main]
async fn main() {
    let (client, objects_to_download) =
        build_client_and_list_objects_to_download().await;
        
    let get_object_futures = objects_to_download.into_iter().map(|object| {
        let req = client
            .get_object()
            .key(&object)
            .bucket(EXAMPLE_BUCKET);

        async move {
            let res = req
                .send()
                .await
                .expect("get_object succeeds");
            let body = res.body.collect().await.expect("body succeeds").into_bytes();
            // Note that we MUST use the async runtime's preferred way
            // of writing files. Otherwise, this call would block,
            // potentially causing a deadlock.
            tokio::fs::write(object, body).await.expect("write succeeds");
        }
    });

    futures_util::future::join_all(get_object_futures).await;
}
```

 This is the simplest way to benefit from concurrency, but it also has a few issues that might not be obvious at first glance:

1.  We create all the request inputs at the same time. If we don't have enough memory to hold all the `get_object` request inputs then we'll run into an "out-of-memory" allocation error. 

1.  We create and await all the futures at the same time. Amazon S3 throttles requests if we try to download too much at once. 

To fix both of these issues, we must limit the amount of requests that we're sending at any one time. We'll do this with a `tokio` [semaphore](https://docs.rs/tokio/latest/tokio/sync/struct.Semaphore.html):

```
use std::sync::Arc;
use tokio::sync::Semaphore;
const CONCURRENCY_LIMIT: usize = 50; 

#[tokio::main(flavor = "current_thread")]
async fn main() {
    let (client, objects_to_download) =
        build_client_and_list_objects_to_download().await;
    let concurrency_semaphore = Arc::new(Semaphore::new(CONCURRENCY_LIMIT));

    let get_object_futures = objects_to_download.into_iter().map(|object| {
        // Since each future needs to acquire a permit, we need to clone
        // the Arc'd semaphore before passing it in.
        let semaphore = concurrency_semaphore.clone();
        // We also need to clone the client so each task has its own handle.
        let client = client.clone();
        async move {
            let permit = semaphore
                .acquire()
                .await
                .expect("we'll get a permit if we wait long enough");
            let res = client
                .get_object()
                .key(&object)
                .bucket(EXAMPLE_BUCKET)
                .send()
                .await
                .expect("get_object succeeds");
            let body = res.body.collect().await.expect("body succeeds").into_bytes();
            tokio::fs::write(object, body).await.expect("write succeeds");
            std::mem::drop(permit);
        }
    });

    futures_util::future::join_all(get_object_futures).await;
}
```

We've fixed the potential memory usage issue by moving the request creation into the `async` block. This way, requests won't be created until it's time to send them. 

**Note**  
 If you have the memory for it, it might be more efficient to create all your request inputs at once and hold them in memory until they're ready to be sent. To try this, move request input creation outside of the `async` block. 

 We've also fixed the issue of sending too many requests at once by limiting requests in flight to `CONCURRENCY_LIMIT`. 

**Note**  
 The right value for `CONCURRENCY_LIMIT` is different for every project. When constructing and sending your own requests, try to set it as high as you can without running into throttling errors. While it's possible to dynamically update your concurrency limit based on the ratio of successful to throttled responses that a service sends back, that's outside the scope of this guide due to its complexity. 

## Rewriting our example to be more efficient (multi-threaded concurrency)
<a name="conc-multiThread"></a>

 In the previous two examples, we performed our requests concurrently. While this is more efficient than running them sequentially, we can make things even more efficient by using multi-threading. To do this with `tokio`, we'll need to spawn the requests as separate tasks. 

**Note**  
 This example requires that you use the multi-threaded `tokio` runtime. This runtime is gated behind the `rt-multi-thread` feature. And, of course, you'll need to run your program on a multi-core machine. 

Run the following command to add a new dependency to your project:
+ `cargo add tokio --features=rt-multi-thread`

```
// Set this based on the number of cores your target machine has.
const THREADS: usize = 8; 

#[tokio::main(flavor = "multi_thread")]
async fn main() {
    let (client, objects_to_download) =
        build_client_and_list_objects_to_download().await;
    let concurrency_semaphore = Arc::new(Semaphore::new(THREADS));

    let get_object_task_handles = objects_to_download.into_iter().map(|object| {
        // Since each future needs to acquire a permit, we need to clone
        // the Arc'd semaphore before passing it in.
        let semaphore = concurrency_semaphore.clone();
        // We also need to clone the client so each task has its own handle.
        let client = client.clone();
        
        // Note this difference! We're using `tokio::task::spawn` to
        // immediately begin running these requests.
        tokio::task::spawn(async move {
            let permit = semaphore
                .acquire()
                .await
                .expect("we'll get a permit if we wait long enough");
            let res = client
                .get_object()
                .key(&object)
                .bucket(EXAMPLE_BUCKET)
                .send()
                .await
                .expect("get_object succeeds");
            let body = res.body.collect().await.expect("body succeeds").into_bytes();
            tokio::fs::write(object, body).await.expect("write succeeds");
            std::mem::drop(permit);
        })
    });

    futures_util::future::join_all(get_object_task_handles).await;
}
```

Dividing work into tasks can be complex. Doing I/O (*input/output*) is typically blocking. Runtimes might struggle to balance the needs of long-running tasks with those of short-running tasks. Whatever runtime you choose, be sure to read their recommendations for the most efficient way to divide your work into tasks. For the `tokio` runtime recommendations, see [Module `tokio::task`](https://docs.rs/tokio/latest/tokio/task/index.html).

## Debugging multi-threaded apps
<a name="conc-debug"></a>

Tasks running concurrently can run in any order. As such, the logs of concurrent programs can be very difficult to read. In the SDK for Rust, we recommend using the `tracing` logging system. It can group logs with their specific tasks, no matter when they run. For guidance, see [Configuring and using logging in the AWS SDK for Rust](logging.md). 

A very useful tool for identifying tasks that have locked up is [`tokio-console`](https://github.com/tokio-rs/console), a diagnostic and debugging tool for asynchronous Rust programs. By instrumenting and running your program, and then running the `tokio-console` app, you can see a live view of the tasks your program is running. This view includes helpful information such as the amount of time a task has spent waiting to acquire shared resources or the number of times it has been polled. 

# Creating Lambda functions in the AWS SDK for Rust
<a name="lambda"></a>

For detailed documentation on developing AWS Lambda functions with the AWS SDK for Rust, see [Building Lambda functions with Rust](https://docs.aws.amazon.com/lambda/latest/dg/lambda-rust.html) in the *AWS Lambda Developer Guide*. That documentation guides you through using:
+ The Rust Lambda runtime client crate, [`aws-lambda-rust-runtime`](https://github.com/awslabs/aws-lambda-rust-runtime), for core functionality. 
+ The recommended command-line tool for deploying the Rust function binary to Lambda with [Cargo Lambda](https://www.cargo-lambda.info/guide/what-is-cargo-lambda.html). 

In addition to the guided examples that are in the *AWS Lambda Developer Guide*, there is also a Lambda calculator example available in the [AWS SDK Code Examples Repository](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/rustv1/lambda) on GitHub.

# Creating presigned URLs using the AWS SDK for Rust
<a name="presigned-urls"></a>

 You can presign requests for some AWS API operations so that another caller can use the request later without presenting their own credentials. 

 For example, assume that Jane has access to an Amazon Simple Storage Service (Amazon S3) object and she wants to temporarily share object access with Alejandro. Jane can generate a presigned `GetObject` request to share with Alejandro so that he can download the object without requiring access to Jane's credentials or having any of his own. The credentials used by the presigned URL are Jane's because she is the AWS user who generated the URL.

To learn more about presigned URLs in Amazon S3, see [Working with presigned URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html) in the *Amazon Simple Storage Service User Guide*.

## Presigning basics
<a name="presign-basics"></a>

 The AWS SDK for Rust provides a `presigned()` method on operation fluent-builders that can be used to get a presigned request. 

 The following example creates a presigned `GetObject` request for Amazon S3. The request is valid for 5 minutes after creation. 

```
use std::time::Duration;
use aws_config::BehaviorVersion;
use aws_sdk_s3::presigning::PresigningConfig;

let config = aws_config::defaults(BehaviorVersion::latest())
    .load()
    .await;

let s3 = aws_sdk_s3::Client::new(&config);

let presigned = s3.get_object()
    .bucket("my-bucket")
    .key("my-key")
    .presigned(
        PresigningConfig::builder()
            .expires_in(Duration::from_secs(60 * 5))
            .build()
            .expect("less than one week")
    )
    .await?;
```

 The `presigned()` method returns a `Result<PresignedRequest, SdkError<E, R>>`. 

The returned `PresignedRequest` contains methods to access the components of an HTTP request, including the method, URI, and any headers. All of these must be sent to the service, if present, for the request to be valid. Many presigned requests, however, can be represented by the URI alone. 
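As a sketch, assuming `presigned` is the `PresignedRequest` from the previous example, those components can be read off the returned value:

```
// The HTTP method and URI; for many GET-style requests the URI alone
// is enough to share.
println!("method: {}", presigned.method());
println!("uri: {}", presigned.uri());

// Any headers must also be sent with the request for it to be valid.
for (name, value) in presigned.headers() {
    println!("header: {name}: {value}");
}
```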

## Presigning `POST` and `PUT` requests
<a name="presign-post-put"></a>

 Many operations that are presignable require only a URL and must be sent as HTTP `GET` requests. Some operations, however, take a body and must be sent as an HTTP `POST` or HTTP `PUT` request along with headers in some cases. Presigning these requests is identical to presigning `GET` requests, but invoking the presigned request is more complicated. 

 The following is an example of presigning an Amazon S3 `PutObject` request and converting it into an [`http::Request`](https://docs.rs/http/latest/http/request/struct.Request.html) that can be sent using an HTTP client of your choosing. 

To use the `into_http_1x_request()` method, add the `http-1x` feature to your `aws-sdk-s3` crate in your `Cargo.toml` file:

```
aws-sdk-s3 = { version = "1", features = ["http-1x"] }
```

Source file:

```
let presigned = s3.put_object()
    .bucket("my-bucket")
    .key("my-key")
    .presigned(
        PresigningConfig::builder()
            .expires_in(Duration::from_secs(60 * 5))
            .build()
            .expect("less than one week")
    )
    .await?;

let body = "Hello AWS SDK for Rust";
let http_req = presigned.into_http_1x_request(body);
```

## Standalone Signer
<a name="standalone-signer"></a>

**Note**  
This is an advanced use case. It isn't needed or recommended for most users.

There are a few use cases where it is necessary to create a signed request outside of the SDK for Rust context. For those, you can use the [`aws-sigv4`](https://docs.rs/aws-sigv4/latest/aws_sigv4/index.html) crate independently from the SDK. 

 The following example demonstrates the basic elements; see the crate documentation for more details. 

Add the `aws-sigv4`, `aws-credential-types`, `aws-smithy-runtime-api`, and `http` crates to your `Cargo.toml` file:

```
[dependencies]
aws-credential-types = "1"
aws-sigv4 = "1"
aws-smithy-runtime-api = "1"
http = "1"
```

Source file:

```
use aws_credential_types::Credentials;
use aws_sigv4::http_request::{sign, SignableBody, SignableRequest, SigningParams, SigningSettings};
use aws_sigv4::sign::v4;
use aws_smithy_runtime_api::client::identity::Identity;
use std::time::SystemTime;

// Set up information and settings for the signing.
// You can obtain credentials from `SdkConfig`.
let identity = Credentials::new(
    "AKIDEXAMPLE",
    "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
    None,
    None,
    "hardcoded-credentials").into();

let settings = SigningSettings::default();

let params = v4::SigningParams::builder()
    .identity(&identity)
    .region("us-east-1")
    .name("service")
    .time(SystemTime::now())
    .settings(settings)
    .build()?
    .into();

// Convert the HTTP request into a signable request.
let signable = SignableRequest::new(
    "GET",
    "https://some-endpoint.some-region.amazonaws.com",
    std::iter::empty(),
    SignableBody::UnsignedPayload
)?;

// Sign and then apply the signature to the request.
let (signing_instructions, _signature) = sign(signable, &params)?.into_parts();

let mut my_req = http::Request::new("...");
signing_instructions.apply_to_request_http1x(&mut my_req);
```

# Handling errors in the AWS SDK for Rust
<a name="error-handling"></a>

Understanding how and when the AWS SDK for Rust returns errors is important to building high-quality applications using the SDK. The following sections describe the different errors you might encounter from the SDK and how to handle them appropriately. 

Every operation returns a `Result` type with the error type set to [`SdkError`](https://docs.rs/aws-smithy-runtime-api/latest/aws_smithy_runtime_api/client/result/enum.SdkError.html). `SdkError` is an enum with several possible types, called variants.
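As a sketch, the variants can be matched to distinguish failures that never left the client from failures reported by the service (here, `result` stands for the output of some `send().await` call):

```
use aws_sdk_s3::error::SdkError;

match result {
    Ok(_output) => { /* Success. */ }
    // The request could not be constructed (for example, invalid configuration).
    Err(SdkError::ConstructionFailure(_)) => { /* ... */ }
    // The configured operation or attempt timeout elapsed.
    Err(SdkError::TimeoutError(_)) => { /* ... */ }
    // The request never received a response (for example, a connection error).
    Err(SdkError::DispatchFailure(_)) => { /* ... */ }
    // A response was received, but it couldn't be parsed.
    Err(SdkError::ResponseError(_)) => { /* ... */ }
    // The service returned a modeled error response.
    Err(SdkError::ServiceError(context)) => {
        let _err = context.err();
        /* ... */
    }
    // `SdkError` is non-exhaustive, so a catch-all arm is required.
    Err(_) => { /* ... */ }
}
```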

## Service errors
<a name="serviceErrors"></a>

The most common type of error is [`SdkError::ServiceError`](https://docs.rs/aws-smithy-runtime-api/latest/aws_smithy_runtime_api/client/result/enum.SdkError.html#variant.ServiceError). This error represents an error response from an AWS service. For example, if you try to get an object from Amazon S3 that doesn't exist, Amazon S3 returns an error response.

When you encounter an `SdkError::ServiceError` it means that your request was successfully sent to the AWS service but could not be processed. This can be because of errors in the request's parameters or because of issues on the service side. 

 The error response details are included in the error variant. The following example shows how to conveniently access the underlying `ServiceError` variant and handle different error cases:

```
// Needed to match on the modeled error type:
use aws_sdk_s3::operation::get_object::GetObjectError;
// Needed to access the '.code()' function on the error type:
use aws_sdk_s3::error::ProvideErrorMetadata;

let result = s3.get_object()
    .bucket("my-bucket")
    .key("my-key")
    .send()
    .await;

match result {
    Ok(_output) => { /* Success. Do something with the output. */ }
    Err(err) => match err.into_service_error() {
        GetObjectError::InvalidObjectState(value) =>  {
            println!("invalid object state: {:?}", value);
        }
        GetObjectError::NoSuchKey(_) => {
            println!("object didn't exist");
        }
        // err.code() returns the raw error code from the service and can be 
        //     used as a last resort for handling unmodeled service errors. 
        err if err.code() == Some("SomeUnmodeledError") => {}
        err => return Err(err.into())
    }
};
```

## Error metadata
<a name="errorMetadata"></a>

 Every service error has additional metadata that can be accessed by importing service-specific traits. 
+ The `<service>::error::ProvideErrorMetadata` trait provides access to any available underlying raw error code and error message returned from the service.
  + For Amazon S3, this trait is [`aws_sdk_s3::error::ProvideErrorMetadata`](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/error/trait.ProvideErrorMetadata.html). 

You can also get information that might be useful when troubleshooting service errors:
+ The `<service>::operation::RequestId` trait adds extension methods to retrieve the unique AWS request ID that was generated by the service. 
  + For Amazon S3, this trait is [`aws_sdk_s3::operation::RequestId`](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/operation/trait.RequestId.html).
+ The `<service>::operation::RequestIdExt` trait adds the `extended_request_id()` method to get an additional, extended request ID. 
  + Only supported by some services.
  + For Amazon S3, this trait is [`aws_sdk_s3::operation::RequestIdExt`](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/operation/trait.RequestIdExt.html).
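Putting these traits together, a sketch of logging metadata from an Amazon S3 error might look like the following (`err` is an illustrative name for a service error returned by some S3 operation):

```
use aws_sdk_s3::error::ProvideErrorMetadata;
use aws_sdk_s3::operation::{RequestId, RequestIdExt};

// Raw error code and message from the service, if available.
println!("code: {:?}", err.code());
println!("message: {:?}", err.message());

// Request IDs that AWS Support can use to trace the request.
println!("request id: {:?}", err.request_id());
println!("extended request id: {:?}", err.extended_request_id());
```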

## Detailed error printing with `DisplayErrorContext`
<a name="displayErrorContext"></a>

 Errors in the SDK are generally the result of a chain of failures such as:

1. Dispatching a request has failed because the connector returned an error.

1. The connector returned an error because the credentials provider returned an error. 

1. The credentials provider returned an error because it called a service and that service returned an error.

1. The service returned an error because the credentials request didn't have the correct authorization.

By default, displaying this error only outputs "dispatch failure". This lacks the details that help troubleshoot the error. The SDK for Rust provides a simple error reporter called `DisplayErrorContext`.
+  The `<service>::error::DisplayErrorContext` struct adds functionality to output the full error context.
  + For Amazon S3, this struct is [`aws_sdk_s3::error::DisplayErrorContext`](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/error/struct.DisplayErrorContext.html).
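A sketch of wrapping an error this way (assuming `err` is an error returned from some Amazon S3 operation):

```
use aws_sdk_s3::error::DisplayErrorContext;

// Wrapping the error prints the full chain of underlying causes.
println!("{}", DisplayErrorContext(&err));
```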

When we wrap the error to be displayed and print it, `DisplayErrorContext` provides a much more detailed message similar to the following:

```
dispatch failure: other: Session token not found or invalid.
DispatchFailure(
    DispatchFailure { 
        source: ConnectorError { 
            kind: Other(None), 
            source: ProviderError(
                ProviderError { 
                    source: ProviderError(
                        ProviderError { 
                            source: ServiceError(
                                ServiceError { 
                                    source: UnauthorizedException(
                                        UnauthorizedException { 
                                            message: Some("Session token not found or invalid"), 
                                            meta: ErrorMetadata { 
                                                code: Some("UnauthorizedException"), 
                                                message: Some("Session token not found or invalid"), 
                                                extras: Some({"aws_request_id": "1b6d7476-f5ec-4a16-9890-7684ccee7d01"})
                                            } 
                                        }
                                    ), 
                                    raw: Response {
                                        status: StatusCode(401), 
                                        headers: Headers {
                                            headers: {
                                                "date": HeaderValue { _private: H0("Thu, 04 Jul 2024 07:41:21 GMT") }, 
                                                "content-type": HeaderValue { _private: H0("application/json") }, 
                                                "content-length": HeaderValue { _private: H0("114") }, 
                                                "access-control-expose-headers": HeaderValue { _private: H0("RequestId") }, 
                                                "access-control-expose-headers": HeaderValue { _private: H0("x-amzn-RequestId") }, 
                                                "requestid": HeaderValue { _private: H0("1b6d7476-f5ec-4a16-9890-7684ccee7d01") }, 
                                                "server": HeaderValue { _private: H0("AWS SSO") }, 
                                                "x-amzn-requestid": HeaderValue { _private: H0("1b6d7476-f5ec-4a16-9890-7684ccee7d01") }
                                            } 
                                        }, 
                                        body: SdkBody {
                                            inner: Once(
                                                Some(
                                                    b"{
                                                        \"message\":\"Session token not found or invalid\",
                                                        \"__type\":\"com.amazonaws.switchboard.portal#UnauthorizedException\"}"
                                                    )
                                                ), 
                                            retryable: true 
                                        }, 
                                        extensions: Extensions {
                                            extensions_02x: Extensions, 
                                            extensions_1x: Extensions 
                                        }
                                    } 
                                }
                            ) 
                        }
                    ) 
                }
            ), 
            connection: Unknown 
        } 
    }
)
```

# Using paginated results in the AWS SDK for Rust
<a name="paginating"></a>

Many AWS operations return truncated results when the payload is too large to return in a single response. Instead, the service returns a portion of the data and a token to retrieve the next set of items. This pattern is known as pagination. 
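Before looking at the SDK's paginator support, the token-driven loop itself can be sketched in plain Rust. Everything below (`fetch_page`, the `usize` token) is illustrative, not an SDK API:

```rust
// Simulated paginated API: returns one page of items plus an optional
// continuation token that points at the next page (None = last page).
fn fetch_page(token: Option<usize>) -> (Vec<u32>, Option<usize>) {
    let pages = [vec![1, 2], vec![3, 4], vec![5]];
    let idx = token.unwrap_or(0);
    let next = if idx + 1 < pages.len() { Some(idx + 1) } else { None };
    (pages[idx].clone(), next)
}

// Follow continuation tokens until the service reports no more pages.
fn collect_all() -> Vec<u32> {
    let mut items = Vec::new();
    let mut token: Option<usize> = None;
    loop {
        let (page, next) = fetch_page(token);
        items.extend(page);
        match next {
            Some(t) => token = Some(t),
            None => break,
        }
    }
    items
}
```

This bookkeeping is exactly what the SDK's paginators take off your hands.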

The AWS SDK for Rust can handle this pagination for you; you only have to write the code that processes the results. The fluent builder for every paginated operation has an `into_paginator()` method that exposes a [https://docs.rs/aws-smithy-async/latest/aws_smithy_async/future/pagination_stream/struct.PaginationStream.html](https://docs.rs/aws-smithy-async/latest/aws_smithy_async/future/pagination_stream/struct.PaginationStream.html) to paginate over the results.
+ In Amazon S3, one example of this is [https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/operation/list_objects_v2/builders/struct.ListObjectsV2FluentBuilder.html#method.into_paginator](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/operation/list_objects_v2/builders/struct.ListObjectsV2FluentBuilder.html#method.into_paginator). 

The following examples use Amazon Simple Storage Service. However, the concepts are the same for any service that has one or more paginated APIs. 

The following code example shows the simplest approach, which uses the [https://docs.rs/aws-smithy-async/latest/aws_smithy_async/future/pagination_stream/struct.PaginationStream.html#method.try_collect](https://docs.rs/aws-smithy-async/latest/aws_smithy_async/future/pagination_stream/struct.PaginationStream.html#method.try_collect) method to collect all paginated results into a `Vec`: 

```
let config = aws_config::defaults(BehaviorVersion::latest())
    .load()
    .await;

let s3 = aws_sdk_s3::Client::new(&config);

let all_objects = s3.list_objects_v2()
    .bucket("my-bucket")
    .into_paginator()
    .send()
    .try_collect()
    .await?
    .into_iter()
    .flat_map(|o| o.contents.unwrap_or_default())
    .collect::<Vec<_>>();
```

Sometimes you want more control over paging and don't want to pull everything into memory at once. The following example iterates over the objects in an Amazon S3 bucket until there are no more pages.

```
let config = aws_config::defaults(BehaviorVersion::latest())
    .load()
    .await;

let s3 = aws_sdk_s3::Client::new(&config);

let mut paginator = s3.list_objects_v2()
    .bucket("my-bucket")
    .into_paginator()
    // Customize the page size (maximum results per response)
    .page_size(10)
    .send();

println!("Objects in bucket:");

while let Some(result) = paginator.next().await {
    let resp = result?;
    for obj in resp.contents() {
        println!("\t{:?}", obj);
    }
}
```

# Adding unit testing to your AWS SDK for Rust application
<a name="testing"></a>

While there are many ways you can implement unit testing in your AWS SDK for Rust project, there are a few that we recommend:
+ [Unit testing using `mockall`](testing-automock.md) – Use `automock` from the `mockall` crate to automatically generate and execute your tests.
+ [Static replay](testing-replay.md) – Use the AWS Smithy runtime's `StaticReplayClient` to create a fake HTTP client that can be used instead of the standard HTTP client that is normally used by AWS services. This client returns the HTTP responses that you specify rather than communicating with the service over the network, so that tests get known data for testing purposes.
+ [Unit testing using `aws-smithy-mocks`](testing-smithy-mocks.md) – Use `mock` and `mock_client` from the `aws-smithy-mocks` crate to mock AWS SDK client responses and to create mock rules that define how the SDK should respond to specific requests.

# Automatically generate mocks using `mockall` in the AWS SDK for Rust
<a name="testing-automock"></a>

The AWS SDK for Rust provides multiple approaches for testing your code that interacts with AWS services. You can automatically generate the majority of the mock implementations that your tests need by using the popular `[automock](https://docs.rs/mockall/latest/mockall/attr.automock.html)` attribute from the `[mockall](https://docs.rs/mockall/latest/mockall)` crate.

This example tests a custom method called `determine_prefix_file_size()`. This method calls a custom `list_objects()` wrapper method that calls Amazon S3. By mocking `list_objects()`, the `determine_prefix_file_size()` method can be tested without actually contacting Amazon S3. 

1. In a command prompt for your project directory, add the `[mockall](https://docs.rs/mockall/latest/mockall)` crate as a dependency:

   ```
   $ cargo add --dev mockall
   ```

   Using the `--dev` [option](https://doc.rust-lang.org/cargo/commands/cargo-add.html) adds the crate to the `[dev-dependencies]` section of your `Cargo.toml` file. As a [development dependency](https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#development-dependencies), it is not compiled into the final binary that you use in production.

   This example code also uses Amazon Simple Storage Service as the example AWS service.

   ```
   $ cargo add aws-sdk-s3
   ```

   This adds the crate to the `[dependencies]` section of your `Cargo.toml` file.

1. Include the `automock` module from the `mockall` crate. 

   Also include any other libraries related to the AWS service that you are testing, in this case, Amazon S3.

   ```
   use aws_sdk_s3 as s3;
   #[allow(unused_imports)]
   use mockall::automock;
   
   use s3::operation::list_objects_v2::{ListObjectsV2Error, ListObjectsV2Output};
   ```

1. Next, add code that determines which of two implementations of the application's Amazon S3 wrapper structure to use.
   + The real one written to access Amazon S3 over the network.
   + The mock implementation generated by `mockall`.

   In this example, the one that's selected is given the name `S3`. The selection is conditional based on the `test` attribute:

   ```
   #[cfg(test)]
   pub use MockS3Impl as S3;
   #[cfg(not(test))]
   pub use S3Impl as S3;
   ```

1. The `S3Impl` struct is the implementation of the Amazon S3 wrapper structure that actually sends requests to AWS.
   + When testing is enabled, this code isn't used because the request is sent to the mock and not AWS. The `dead_code` attribute tells the linter not to report a problem if the `S3Impl` type isn't used.
   +  The conditional `#[cfg_attr(test, automock)]` indicates that when testing is enabled, the `automock` attribute should be set. This tells `mockall` to generate a mock of `S3Impl` that will be named `MockS3Impl`.
   + In this example, the `list_objects()` method is the call you want mocked. `automock` will automatically create an `expect_list_objects()` method for you. 

   ```
   #[allow(dead_code)]
   pub struct S3Impl {
       inner: s3::Client,
   }
   
   #[cfg_attr(test, automock)]
   impl S3Impl {
       #[allow(dead_code)]
       pub fn new(inner: s3::Client) -> Self {
           Self { inner }
       }
   
       #[allow(dead_code)]
       pub async fn list_objects(
           &self,
           bucket: &str,
           prefix: &str,
           continuation_token: Option<String>,
       ) -> Result<ListObjectsV2Output, s3::error::SdkError<ListObjectsV2Error>> {
           self.inner
               .list_objects_v2()
               .bucket(bucket)
               .prefix(prefix)
               .set_continuation_token(continuation_token)
               .send()
               .await
       }
   }
   ```

1. Create the test functions in a module named `test`.
   + The conditional `#[cfg(test)]` indicates that the `test` module should be compiled only when testing is enabled.

   ```
   #[cfg(test)]
   mod test {
       use super::*;
       use mockall::predicate::eq;
   
       #[tokio::test]
       async fn test_single_page() {
           let mut mock = MockS3Impl::default();
           mock.expect_list_objects()
               .with(eq("test-bucket"), eq("test-prefix"), eq(None))
               .return_once(|_, _, _| {
                   Ok(ListObjectsV2Output::builder()
                       .set_contents(Some(vec![
                           // Mock content for ListObjectsV2 response
                           s3::types::Object::builder().size(5).build(),
                           s3::types::Object::builder().size(2).build(),
                       ]))
                       .build())
               });
   
           // Run the code we want to test with it
           let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
               .await
               .unwrap();
   
           // Verify we got the correct total size back
           assert_eq!(7, size);
       }
   
       #[tokio::test]
       async fn test_multiple_pages() {
           // Create the Mock instance with two pages of objects now
           let mut mock = MockS3Impl::default();
           mock.expect_list_objects()
               .with(eq("test-bucket"), eq("test-prefix"), eq(None))
               .return_once(|_, _, _| {
                   Ok(ListObjectsV2Output::builder()
                       .set_contents(Some(vec![
                           // Mock content for ListObjectsV2 response
                           s3::types::Object::builder().size(5).build(),
                           s3::types::Object::builder().size(2).build(),
                       ]))
                       .set_next_continuation_token(Some("next".to_string()))
                       .build())
               });
           mock.expect_list_objects()
               .with(
                   eq("test-bucket"),
                   eq("test-prefix"),
                   eq(Some("next".to_string())),
               )
               .return_once(|_, _, _| {
                   Ok(ListObjectsV2Output::builder()
                       .set_contents(Some(vec![
                           // Mock content for ListObjectsV2 response
                           s3::types::Object::builder().size(3).build(),
                           s3::types::Object::builder().size(9).build(),
                       ]))
                       .build())
               });
   
           // Run the code we want to test with it
           let size = determine_prefix_file_size(mock, "test-bucket", "test-prefix")
               .await
               .unwrap();
   
           assert_eq!(19, size);
       }
   }
   ```
   + Each test uses `let mut mock = MockS3Impl::default();` to create a `mock` instance of `MockS3Impl`. 
   + It uses the mock's `expect_list_objects()` method (which was created automatically by `automock`) to set the expected result for when the `list_objects()` method is used elsewhere in the code.
   + After the expectations are established, it uses these to test the function by calling `determine_prefix_file_size()`. The returned value is checked to confirm that it's correct, using an assertion.

1. The `determine_prefix_file_size()` function uses the Amazon S3 wrapper to get the size of the prefix file:

   ```
   #[allow(dead_code)]
   pub async fn determine_prefix_file_size(
       // Now we take a reference to our trait object instead of the S3 client
       // s3_list: ListObjectsService,
       s3_list: S3,
       bucket: &str,
       prefix: &str,
   ) -> Result<usize, s3::Error> {
       let mut next_token: Option<String> = None;
       let mut total_size_bytes = 0;
       loop {
           let result = s3_list
               .list_objects(bucket, prefix, next_token.take())
               .await?;
   
           // Add up the file sizes we got back
           for object in result.contents() {
               total_size_bytes += object.size().unwrap_or(0) as usize;
           }
   
           // Handle pagination, and break the loop if there are no more pages
           next_token = result.next_continuation_token.clone();
           if next_token.is_none() {
               break;
           }
       }
       Ok(total_size_bytes)
   }
   ```

Because the code calls the wrapper through the `S3` type alias, the same application code works with both `S3Impl`, which makes real HTTP requests, and `MockS3Impl`. The mock automatically generated by `mockall` reports any test failures when testing is enabled.

You can [view the complete code for these examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/rustv1/examples/testing) on GitHub.

# Simulate HTTP traffic using static replay in the AWS SDK for Rust
<a name="testing-replay"></a>

The AWS SDK for Rust provides multiple approaches for testing your code that interacts with AWS services. This topic describes how to use the `StaticReplayClient` to create a fake HTTP client that can be used instead of the standard HTTP client that is normally used by AWS services. This client returns the HTTP responses that you specify rather than communicating with the service over the network, so that tests get known data for testing purposes.

The `aws-smithy-http-client` crate includes a test utility class called [https://docs.rs/aws-smithy-http-client/latest/aws_smithy_http_client/test_util/struct.StaticReplayClient.html](https://docs.rs/aws-smithy-http-client/latest/aws_smithy_http_client/test_util/struct.StaticReplayClient.html). This HTTP client class can be specified instead of the default HTTP client when creating an AWS service object.

When initializing the `StaticReplayClient`, you provide a list of HTTP request and response pairs as `ReplayEvent` objects. While the test is running, each HTTP request is recorded and the client returns the next HTTP response found in the next `ReplayEvent` in the event list as the HTTP client's response. This lets the test run using known data and without a network connection.

## Using static replay
<a name="testing-replay-steps"></a>

To use static replay, you don't need to use a wrapper. Instead, determine what the actual network traffic should look like for the data your test will use, and provide that traffic data to the `StaticReplayClient` to use each time the SDK issues a request from the AWS service client.

**Note**  
There are several ways to collect the expected network traffic, including the AWS CLI and many network traffic analyzers and packet sniffer tools.
+ Create a list of `ReplayEvent` objects that specify the expected HTTP requests and the responses that should be returned for them.
+ Create a `StaticReplayClient` using the HTTP transaction list created in the previous step.
+ Create a configuration object for the AWS client, specifying the `StaticReplayClient` as the `Config` object's `http_client`.
+ Create the AWS service client object, using the configuration created in the previous step.
+ Perform the operations that you want to test, using the service object that's configured to use the `StaticReplayClient`. Each time the SDK sends an API request to AWS, the next response in the list is used.
**Note**  
The next response in the list is always returned, even if the sent request doesn't match the one in the vector of `ReplayEvent` objects.
+ When all the desired requests have been made, call the `StaticReplayClient.assert_requests_match()` function to verify that the requests sent by the SDK match the ones in the list of `ReplayEvent` objects.

## Example
<a name="testing-replay-example"></a>

Let's look at the tests for the same `determine_prefix_file_size()` function in the previous example, but using static replay instead of mocking.

1. In a command prompt for your project directory, add the [https://crates.io/crates/aws-smithy-http-client](https://crates.io/crates/aws-smithy-http-client) crate as a dependency:

   ```
   $ cargo add --dev aws-smithy-http-client --features test-util
   ```

   Using the `--dev` [option](https://doc.rust-lang.org/cargo/commands/cargo-add.html) adds the crate to the `[dev-dependencies]` section of your `Cargo.toml` file. As a [development dependency](https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#development-dependencies), it is not compiled into the final binary that you use in production.

   This example code also uses Amazon Simple Storage Service as the example AWS service.

   ```
   $ cargo add aws-sdk-s3
   ```

   This adds the crate to the `[dependencies]` section of your `Cargo.toml` file.

1. In your test code module, include both of the types that you'll need.

   ```
   use aws_smithy_http_client::test_util::{ReplayEvent, StaticReplayClient};
   use aws_sdk_s3::primitives::SdkBody;
   ```

1. The test begins by creating the `ReplayEvent` structures representing each of the HTTP transactions that should take place during the test. Each event contains an HTTP request object and an HTTP response object representing the information that the AWS service would normally reply with. These events are passed into a call to `StaticReplayClient::new()`:

   ```
           let page_1 = ReplayEvent::new(
                   http::Request::builder()
                       .method("GET")
                       .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix")
                       .body(SdkBody::empty())
                       .unwrap(),
                   http::Response::builder()
                       .status(200)
                       .body(SdkBody::from(include_str!("./testing/response_multi_1.xml")))
                       .unwrap(),
               );
           let page_2 = ReplayEvent::new(
                   http::Request::builder()
                       .method("GET")
                       .uri("https://test-bucket.s3.us-east-1.amazonaws.com/?list-type=2&prefix=test-prefix&continuation-token=next")
                       .body(SdkBody::empty())
                       .unwrap(),
                   http::Response::builder()
                       .status(200)
                       .body(SdkBody::from(include_str!("./testing/response_multi_2.xml")))
                       .unwrap(),
               );
           let replay_client = StaticReplayClient::new(vec![page_1, page_2]);
   ```

   The result is stored in `replay_client`. This represents an HTTP client that can then be used by the SDK for Rust by specifying it in the client's configuration.

1. To create the Amazon S3 client, call the client class's `from_conf()` function to create the client using a configuration object:

   ```
           let client: s3::Client = s3::Client::from_conf(
               s3::Config::builder()
                   .behavior_version(BehaviorVersion::latest())
                   .credentials_provider(make_s3_test_credentials())
                   .region(s3::config::Region::new("us-east-1"))
                   .http_client(replay_client.clone())
                   .build(),
           );
   ```

   The replay client is specified using the builder's `http_client()` method, and the credentials are specified using the `credentials_provider()` method. The credentials are created using a function called `make_s3_test_credentials()`, which returns a fake credentials structure:

   ```
   fn make_s3_test_credentials() -> s3::config::Credentials {
       s3::config::Credentials::new(
           "ATESTCLIENT",
           "astestsecretkey",
           Some("atestsessiontoken".to_string()),
           None,
           "",
       )
   }
   ```

   These credentials don't need to be valid because they won't actually be sent to AWS.

1. Run the test by calling the function that needs testing. In this example, that function's name is `determine_prefix_file_size()`. Its first parameter is the Amazon S3 client object to use for its requests. Therefore, specify the client created using the `StaticReplayClient` so requests are handled by that rather than going out over the network:

   ```
           let size = determine_prefix_file_size(client, "test-bucket", "test-prefix")
               .await
               .unwrap();
   
           assert_eq!(19, size);
   
           replay_client.assert_requests_match(&[]);
   ```

   When the call to `determine_prefix_file_size()` is finished, an assertion confirms that the returned value matches the expected value. Then the `StaticReplayClient` method `assert_requests_match()` is called. This function scans the recorded HTTP requests and confirms that they all match the ones specified in the array of `ReplayEvent` objects provided when creating the replay client.

You can [view the complete code for these examples](https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/rustv1/examples/testing) on GitHub.

# Unit testing with `aws-smithy-mocks` in the AWS SDK for Rust
<a name="testing-smithy-mocks"></a>

The AWS SDK for Rust provides multiple approaches for testing your code that interacts with AWS services. This topic describes how to use the [https://docs.rs/aws-smithy-mocks/latest/aws_smithy_mocks/](https://docs.rs/aws-smithy-mocks/latest/aws_smithy_mocks/) crate, which offers a simple yet powerful way to mock AWS SDK client responses for testing purposes.

## Overview
<a name="overview-smithy-mock"></a>

When writing tests for code that uses AWS services, you often want to avoid making actual network calls. The `aws-smithy-mocks` crate provides a solution by allowing you to:
+ Create mock rules that define how the SDK should respond to specific requests.
+ Return different types of responses (success, error, HTTP responses).
+ Match requests based on their properties.
+ Define sequences of responses for testing retry behavior.
+ Verify that your rules were used as expected.

## Adding the dependency
<a name="dependency-smithy-mock"></a>

In a command prompt for your project directory, add the [https://crates.io/crates/aws-smithy-mocks](https://crates.io/crates/aws-smithy-mocks) crate as a dependency:

```
$ cargo add --dev aws-smithy-mocks
```

Using the `--dev` [option](https://doc.rust-lang.org/cargo/commands/cargo-add.html) adds the crate to the `[dev-dependencies]` section of your `Cargo.toml` file. As a [development dependency](https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#development-dependencies), it is not compiled into the final binary that you use in production.

This example code also uses Amazon Simple Storage Service as the example AWS service, and requires the `test-util` feature.

```
$ cargo add aws-sdk-s3 --features test-util
```

This adds the crate to the `[dependencies]` section of your `Cargo.toml` file.

## Basic usage
<a name="basic-smithy-mocks"></a>

 Here's a simple example of how to use `aws-smithy-mocks` to test code that interacts with Amazon Simple Storage Service (Amazon S3):

```
use aws_sdk_s3::operation::get_object::GetObjectOutput;
use aws_sdk_s3::primitives::ByteStream;
use aws_smithy_mocks::{mock, mock_client};

#[tokio::test]
async fn test_s3_get_object() {
    // Create a rule that returns a successful response
    let get_object_rule = mock!(aws_sdk_s3::Client::get_object)
        .then_output(|| {
            GetObjectOutput::builder()
                .body(ByteStream::from_static(b"test-content"))
                .build()
        });

    // Create a mocked client with the rule
    let s3 = mock_client!(aws_sdk_s3, [&get_object_rule]);

    // Use the client as you would normally
    let result = s3
        .get_object()
        .bucket("test-bucket")
        .key("test-key")
        .send()
        .await
        .expect("success response");

    // Verify the response
    let data = result.body.collect().await.expect("successful read").to_vec();
    assert_eq!(data, b"test-content");

    // Verify the rule was used
    assert_eq!(get_object_rule.num_calls(), 1);
}
```

## Creating mock rules
<a name="creating-rules-smithy-mocks"></a>

Rules are created using the `mock!` macro, which takes a client operation as an argument. You can then configure how the rule should behave. 

### Matching Requests
<a name="matching-requests-smithy-mocks"></a>

 You can make rules more specific by matching on request properties:

```
let rule = mock!(Client::get_object)
    .match_requests(|req| req.bucket() == Some("test-bucket") && req.key() == Some("test-key"))
    .then_output(|| {
        GetObjectOutput::builder()
            .body(ByteStream::from_static(b"test-content"))
            .build()
    });
```

### Different Response Types
<a name="diff-response-smithy-mocks"></a>

You can return different types of responses:

```
// Return a successful response
let success_rule = mock!(Client::get_object)
    .then_output(|| GetObjectOutput::builder().build());

// Return an error
let error_rule = mock!(Client::get_object)
    .then_error(|| GetObjectError::NoSuchKey(NoSuchKey::builder().build()));

// Return a specific HTTP response
let http_rule = mock!(Client::get_object)
    .then_http_response(|| {
        HttpResponse::new(
            StatusCode::try_from(503).unwrap(),
            SdkBody::from("service unavailable")
        )
    });
```

## Testing retry behavior
<a name="testing-retry-behavior-smithy-mocks"></a>

One of the most powerful features of `aws-smithy-mocks` is the ability to test retry behavior by defining sequences of responses:

```
// Create a rule that returns 503 twice, then succeeds
let retry_rule = mock!(aws_sdk_s3::Client::get_object)
    .sequence()
    .http_status(503, None)                          // First call returns 503
    .http_status(503, None)                          // Second call returns 503
    .output(|| GetObjectOutput::builder().build())   // Third call succeeds
    .build();

// With repetition using times()
let retry_rule = mock!(Client::get_object)
    .sequence()
    .http_status(503, None)
    .times(2)                                        // First two calls return 503
    .output(|| GetObjectOutput::builder().build())   // Third call succeeds
    .build();
```

## Rule modes
<a name="rule-modes-smithy-mocks"></a>

You can control how rules are matched and applied using rule modes:

```
// Sequential mode: Rules are tried in order, and when a rule is exhausted, the next rule is used
let client = mock_client!(aws_sdk_s3, RuleMode::Sequential, [&rule1, &rule2]);

// MatchAny mode: The first matching rule is used, regardless of order
let client = mock_client!(aws_sdk_s3, RuleMode::MatchAny, [&rule1, &rule2]);
```

## Example: Testing retry behavior
<a name="example-retry-smithy-mocks"></a>

Here's a more complete example showing how to test retry behavior:

```
use aws_sdk_s3::operation::get_object::GetObjectOutput;
use aws_sdk_s3::config::RetryConfig;
use aws_sdk_s3::primitives::ByteStream;
use aws_smithy_mocks::{mock, mock_client, RuleMode};

#[tokio::test]
async fn test_retry_behavior() {
    // Create a rule that returns 503 twice, then succeeds
    let retry_rule = mock!(aws_sdk_s3::Client::get_object)
        .sequence()
        .http_status(503, None)
        .times(2)
        .output(|| GetObjectOutput::builder()
            .body(ByteStream::from_static(b"success"))
            .build())
        .build();

    // Create a mocked client with the rule and custom retry configuration
    let s3 = mock_client!(
        aws_sdk_s3,
        RuleMode::Sequential,
        [&retry_rule],
        |client_builder| {
            client_builder.retry_config(RetryConfig::standard().with_max_attempts(3))
        }
    );

    // This should succeed after two retries
    let result = s3
        .get_object()
        .bucket("test-bucket")
        .key("test-key")
        .send()
        .await
        .expect("success after retries");

    // Verify the response
    let data = result.body.collect().await.expect("successful read").to_vec();
    assert_eq!(data, b"success");

    // Verify all responses were used
    assert_eq!(retry_rule.num_calls(), 3);
}
```

## Example: Different responses based on request parameters
<a name="example-request-param-smithy-mocks"></a>

You can also create rules that return different responses based on request parameters:

```
use aws_sdk_s3::operation::get_object::{GetObjectOutput, GetObjectError};
use aws_sdk_s3::types::error::NoSuchKey;
use aws_sdk_s3::Client;
use aws_sdk_s3::primitives::ByteStream;
use aws_smithy_mocks::{mock, mock_client, RuleMode};

#[tokio::test]
async fn test_different_responses() {
    // Create rules for different request parameters
    let exists_rule = mock!(Client::get_object)
        .match_requests(|req| req.bucket() == Some("test-bucket") && req.key() == Some("exists"))
        .sequence()
        .output(|| GetObjectOutput::builder()
            .body(ByteStream::from_static(b"found"))
            .build())
        .build();

    let not_exists_rule = mock!(Client::get_object)
        .match_requests(|req| req.bucket() == Some("test-bucket") && req.key() == Some("not-exists"))
        .sequence()
        .error(|| GetObjectError::NoSuchKey(NoSuchKey::builder().build()))
        .build();

    // Create a mocked client with the rules in MatchAny mode
    let s3 = mock_client!(aws_sdk_s3, RuleMode::MatchAny, [&exists_rule, &not_exists_rule]);

    // Test the "exists" case
    let result1 = s3
        .get_object()
        .bucket("test-bucket")
        .key("exists")
        .send()
        .await
        .expect("object exists");

    let data = result1.body.collect().await.expect("successful read").to_vec();
    assert_eq!(data, b"found");

    // Test the "not-exists" case
    let result2 = s3
        .get_object()
        .bucket("test-bucket")
        .key("not-exists")
        .send()
        .await;

    assert!(result2.is_err());
    assert!(matches!(result2.unwrap_err().into_service_error(),
                    GetObjectError::NoSuchKey(_)));
}
```

## Best practices
<a name="best-practices-smithy-mocks"></a>

When using `aws-smithy-mocks` for testing:

1.  Match specific requests: Use `match_requests()` to ensure your rules apply only to the intended requests, especially with `RuleMode::MatchAny`.

1.  Verify rule usage: Check `rule.num_calls()` to ensure your rules were actually used.

1.  Test error handling: Create rules that return errors to test how your code handles failures.

1.  Test retry logic: Use response sequences to verify that your code correctly handles any custom retry classifiers or other retry behavior.

1. Keep tests focused: Create separate tests for different scenarios rather than trying to cover everything in one test.

# Using waiters in the AWS SDK for Rust
<a name="waiters"></a>

 Waiters are a client-side abstraction used to poll a resource until a desired state is reached, or until it is determined that the resource will not enter the desired state. This is a common task when working with services that are eventually consistent, like Amazon Simple Storage Service, or services that asynchronously create resources, like Amazon Elastic Compute Cloud. Writing logic to continuously poll the status of a resource can be cumbersome and error-prone. The goal of waiters is to move this responsibility out of customer code and into the AWS SDK for Rust, which has in-depth knowledge of the timing aspects for the AWS operation. 
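
To illustrate what waiters abstract away, the following is a minimal sketch of the kind of hand-written polling logic they replace. It uses only the standard library; the `check_ready` function is a hypothetical stand-in for a real status-check API call:

```
use std::thread::sleep;
use std::time::{Duration, Instant};

// Hypothetical status check; in real code this would call a service API
// and inspect the response. Here it pretends the resource becomes ready
// on the third poll.
fn check_ready(attempt: u32) -> bool {
    attempt >= 3
}

// Poll until the resource is ready or the overall deadline elapses,
// returning the number of attempts on success.
fn wait_until_ready(max_wait: Duration, poll_interval: Duration) -> Result<u32, String> {
    let deadline = Instant::now() + max_wait;
    let mut attempt = 0;
    loop {
        attempt += 1;
        if check_ready(attempt) {
            return Ok(attempt);
        }
        // Stop if the next poll would overshoot the deadline.
        if Instant::now() + poll_interval > deadline {
            return Err("timed out waiting for resource".to_string());
        }
        sleep(poll_interval);
    }
}

fn main() {
    match wait_until_ready(Duration::from_secs(5), Duration::from_millis(10)) {
        Ok(attempts) => println!("ready after {attempts} attempts"),
        Err(e) => println!("{e}"),
    }
}
```

Even this simplified version must track a deadline, choose a poll interval, and distinguish "not yet" from "never will be"; SDK waiters handle these concerns, including service-appropriate timing and jitter, for you.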

AWS services that provide support for waiters include a `<service>::waiters` module. 
+ The `<service>::client::Waiters` trait provides waiter methods for the client. The methods are implemented for the `Client` struct, and all follow the standard naming convention `wait_until_<Condition>`.
  + For Amazon S3, this trait is [https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/client/trait.Waiters.html](https://docs.rs/aws-sdk-s3/latest/aws_sdk_s3/client/trait.Waiters.html). 

The following example uses Amazon S3. However, the concepts are the same for any AWS service that has one or more waiters defined. 

The following code example uses a waiter function, instead of hand-written polling logic, to wait for a bucket to exist after it is created.

```
use std::time::Duration;
use aws_config::BehaviorVersion;
// Import Waiters trait to get `wait_until_<Condition>` methods on Client.
use aws_sdk_s3::client::Waiters;


let config = aws_config::defaults(BehaviorVersion::latest())
    .load()
    .await;
    
let s3 = aws_sdk_s3::Client::new(&config);

// This initiates creating an S3 bucket and potentially returns before the bucket exists.
s3.create_bucket()
    .bucket("my-bucket")
    .send()
    .await?;

// When this function returns, the bucket either exists or an error is propagated.
s3.wait_until_bucket_exists()
    .bucket("my-bucket")
    .wait(Duration::from_secs(5))
    .await?;

// The bucket now exists.
```

**Note**  
Each wait method returns a `Result<FinalPoll<...>, WaiterError<...>>` that can be used to access either the final response from reaching the desired condition or the error that occurred. See [FinalPoll](https://docs.rs/aws-smithy-runtime-api/latest/aws_smithy_runtime_api/client/waiters/struct.FinalPoll.html) and [WaiterError](https://docs.rs/aws-smithy-runtime-api/latest/aws_smithy_runtime_api/client/waiters/error/enum.WaiterError.html) in the Rust API documentation for details.