

# Tracking job status and completion reports


With S3 Batch Operations, you can view and update job status, add notifications and logging, track job failures, and generate completion reports. 

**Topics**
+ [Job statuses](#batch-ops-job-status-table)
+ [Updating job status](#updating-job-statuses)
+ [Notifications and logging](#batch-ops-notifications)
+ [Tracking job failures](#batch-ops-job-status-failure)
+ [Completion reports](#batch-ops-completion-report)
+ [Examples: Tracking an S3 Batch Operations job in Amazon EventBridge through AWS CloudTrail](batch-ops-examples-event-bridge-cloud-trail.md)
+ [Examples: S3 Batch Operations completion reports](batch-ops-examples-reports.md)

## Job statuses


After you create and run a job, it progresses through a series of statuses. The following table describes the statuses and possible transitions between them. 


| Status | Description | Transitions | 
| --- | --- | --- | 
| `New` | A job begins in the `New` state when you create it. | A job automatically moves to the `Preparing` state when Amazon S3 begins processing the manifest object. | 
| `Preparing` | Amazon S3 is processing the manifest object and other job parameters to set up and run the job. | A job automatically moves to the `Ready` state after Amazon S3 finishes processing the manifest and other parameters. The job is then ready to begin running the specified operation on the objects listed in the manifest. If the job requires confirmation before running, such as when you create a job using the Amazon S3 console, the job transitions from `Preparing` to `Suspended`. It remains in the `Suspended` state until you confirm that you want to run it. | 
| `Suspended` | The job requires confirmation, but you haven't yet confirmed that you want to run it. Only jobs that you create using the Amazon S3 console require confirmation. A job that's created using the console enters the `Suspended` state immediately after `Preparing`. After you confirm that you want to run the job and the job becomes `Ready`, it never returns to the `Suspended` state. | After you confirm that you want to run the job, its status changes to `Ready`. | 
| `Ready` | Amazon S3 is ready to begin running the requested object operations. | A job automatically moves to `Active` when Amazon S3 begins to run it. The amount of time that a job remains in the `Ready` state depends on whether you have higher-priority jobs running already and how long those jobs take to complete. | 
| `Active` | Amazon S3 is performing the requested operation on the objects listed in the manifest. While a job is `Active`, you can monitor its progress using the Amazon S3 console or the `DescribeJob` operation through the REST API, AWS CLI, or AWS SDKs. | A job moves out of the `Active` state when the job is no longer running operations on objects. This behavior can happen automatically, such as when a job completes successfully or fails. Or this behavior can occur as a result of user actions, such as canceling a job. The state that the job moves to depends on the reason for the transition. | 
| `Pausing` | The job is transitioning to `Paused` from another state. | A job automatically moves to `Paused` when the `Pausing` stage is finished. | 
| `Paused` | A job can become `Paused` if you submit another job with a higher priority while the current job is running. | A `Paused` job automatically returns to `Active` after any higher-priority jobs that are blocking the job's execution complete, fail, or are suspended. | 
| `Completing` | The job is transitioning to `Complete` from another state. | A job automatically moves to `Complete` when the `Completing` stage is finished. | 
| `Complete` | The job has finished performing the requested operation on all objects in the manifest. The operation might have succeeded or failed for every object. If you configured the job to generate a completion report, the report is available as soon as the job is `Complete`. | `Complete` is a terminal state. After a job reaches `Complete`, it doesn't transition to any other state. | 
| `Cancelling` | The job is transitioning to the `Cancelled` state. | A job automatically moves to `Cancelled` when the `Cancelling` stage is finished. | 
| `Cancelled` | You requested that the job be canceled, and S3 Batch Operations has successfully canceled the job. The job won't submit any new requests to Amazon S3. | `Cancelled` is a terminal state. After a job reaches `Cancelled`, the job won't transition to any other state. | 
| `Failing` | The job is transitioning to the `Failed` state. | A job automatically moves to `Failed` once the `Failing` stage is finished. | 
| `Failed` | The job has failed and is no longer running. For more information about job failures, see [Tracking job failures](#batch-ops-job-status-failure). | `Failed` is a terminal state. After a job reaches `Failed`, it won't transition to any other state. | 
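As the table shows, `Complete`, `Cancelled`, and `Failed` are terminal states. When you poll a job's status (for example, with `DescribeJob`), a small helper can decide when to stop polling. The following is an illustrative sketch, not part of any AWS SDK:

```python
# Terminal states from the preceding table: after a job reaches one of
# these, it never transitions to any other state.
TERMINAL_STATES = {"Complete", "Cancelled", "Failed"}

def is_terminal(status: str) -> bool:
    """Return True if a Batch Operations job status is terminal."""
    return status in TERMINAL_STATES
```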

## Updating job status


The following AWS CLI and AWS SDK for Java examples update the status of a Batch Operations job. For more information about using the Amazon S3 console to manage Batch Operations jobs, see [Using the Amazon S3 console to manage your S3 Batch Operations jobs](batch-ops-managing-jobs.md#batch-ops-manage-console).

### Using the AWS CLI


To use the following example commands, replace the *`user input placeholders`* with your own information. 
+ If you didn't specify the `--no-confirmation-required` parameter in your `create-job` command, the job remains in a suspended state until you confirm the job by setting its status to `Ready`. Amazon S3 then makes the job eligible for execution.

  ```
  aws s3control update-job-status \
      --region us-west-2 \
      --account-id 123456789012 \
      --job-id 00e123a4-c0d8-41f4-a0eb-b46f9ba5b07c \
      --requested-job-status 'Ready'
  ```
+ Cancel the job by setting the job status to `Cancelled`.

  ```
  aws s3control update-job-status \
       --region us-west-2 \
       --account-id 123456789012 \
       --job-id 00e123a4-c0d8-41f4-a0eb-b46f9ba5b07c \
       --status-update-reason "No longer needed" \
       --requested-job-status Cancelled
  ```

### Using the AWS SDK for Java


For examples of how to update job status with the AWS SDK for Java, see [Update the status of a batch job](https://docs.aws.amazon.com/AmazonS3/latest/API/s3-control_example_s3-control_UpdateJobStatus_section.html) in the *Amazon S3 API Reference*.

## Notifications and logging


In addition to requesting completion reports, you can also capture, review, and audit Batch Operations activity by using AWS CloudTrail. Because Batch Operations uses existing Amazon S3 API operations to perform tasks, those tasks also emit the same events that they would if you called them directly. Therefore, you can track and record the progress of your job and all of its tasks by using the same notification, logging, and auditing tools and processes that you already use with Amazon S3. For more information, see the examples in the following sections.

**Note**  
Batch Operations generates both management and data events in CloudTrail during job execution. The volume of these events scales with the number of keys in each job's manifest. For more information, see the [CloudTrail pricing](https://aws.amazon.com/cloudtrail/pricing/) page, which includes examples of how pricing changes depending on the number of trails that you have configured in your account. To learn how to configure and log events to fit your needs, see [Create your first trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-tutorial.html#tutorial-step2) in the *AWS CloudTrail User Guide*.

For more information about Amazon S3 events, see [Amazon S3 Event Notifications](EventNotifications.md). 

## Tracking job failures


If an S3 Batch Operations job encounters a problem that prevents it from running successfully, such as not being able to read the specified manifest, the job fails. When a job fails, it generates one or more failure codes or failure reasons. S3 Batch Operations stores the failure codes and reasons with the job so that you can view them by requesting the job's details. If you requested a completion report for the job, the failure codes and reasons also appear there.

To prevent jobs from running a large number of unsuccessful operations, Amazon S3 imposes a task-failure threshold on every Batch Operations job. When a job has run at least 1,000 tasks, Amazon S3 monitors the task-failure rate. At any point, if the failure rate (the number of tasks that have failed as a proportion of the total number of tasks that have run) exceeds 50 percent, the job fails. If your job fails because it exceeded the task-failure threshold, you can identify the cause of the failures. For example, you might have accidentally included some objects in the manifest that don't exist in the specified bucket. After fixing the errors, you can resubmit the job.
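The threshold described above can be sketched as follows. This is an illustrative model of the documented behavior, not Amazon S3's actual implementation:

```python
def exceeds_failure_threshold(tasks_run: int, tasks_failed: int) -> bool:
    """Model of the documented task-failure threshold: after at least
    1,000 tasks have run, the job fails if the failure rate (failed
    tasks as a proportion of tasks run) exceeds 50 percent."""
    if tasks_run < 1000:
        return False  # the failure rate isn't monitored yet
    return tasks_failed / tasks_run > 0.5
```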

**Note**  
S3 Batch Operations operates asynchronously and the tasks don't necessarily run in the order that the objects are listed in the manifest. Therefore, you can't use the manifest ordering to determine which objects' tasks succeeded and which ones failed. Instead, you can examine the job's completion report (if you requested one) or view your AWS CloudTrail event logs to help determine the source of the failures.

## Completion reports


When you create a job, you can request a completion report. As long as S3 Batch Operations successfully invokes at least one task, Amazon S3 generates a completion report after the job finishes running tasks, fails, or is canceled. You can configure the completion report to include all tasks or only failed tasks. 

The completion report includes the job configuration, status, and information for each task, including the object key and version, status, error codes, and descriptions of any errors. Completion reports provide an easy way to view the results of your tasks in a consolidated format with no additional setup required. Completion reports are encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3). For an example of a completion report, see [Examples: S3 Batch Operations completion reports](batch-ops-examples-reports.md). 

If you don't configure a completion report, you can still monitor and audit your job and its tasks by using CloudTrail and Amazon CloudWatch. For more information, see the following topics:
+ [Examples: Tracking an S3 Batch Operations job in Amazon EventBridge through AWS CloudTrail](batch-ops-examples-event-bridge-cloud-trail.md)
+ [Examples: S3 Batch Operations completion reports](batch-ops-examples-reports.md)

# Examples: Tracking an S3 Batch Operations job in Amazon EventBridge through AWS CloudTrail

Amazon S3 Batch Operations job activity is recorded as events in AWS CloudTrail. You can create a custom rule in Amazon EventBridge and send these events to the target notification resource of your choice, such as Amazon Simple Notification Service (Amazon SNS). 

**Note**  
Amazon EventBridge is the preferred way to manage your events. Amazon CloudWatch Events and EventBridge are the same underlying service and API, but EventBridge provides more features. Changes that you make in either CloudWatch or EventBridge appear in each console. For more information, see the *[Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/)*.

**Topics**
+ [S3 Batch Operations events recorded in CloudTrail](#batch-ops-examples-cloud-trail-events)
+ [EventBridge rule for tracking S3 Batch Operations job events](#batch-ops-examples-event-bridge)

## S3 Batch Operations events recorded in CloudTrail



When a Batch Operations job is created, it is recorded as a `JobCreated` event in CloudTrail. As the job runs, it changes state during processing, and additional `JobStatusChanged` events are recorded in CloudTrail. You can view these events on the [CloudTrail console](https://console.aws.amazon.com/cloudtrail). For more information about CloudTrail, see [How CloudTrail works](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html) in the *AWS CloudTrail User Guide*.

**Note**  
Only S3 Batch Operations job `status-change` events are recorded in CloudTrail.

**Example — S3 Batch Operations job completion event recorded by CloudTrail**  

```
{
    "eventVersion": "1.05",
    "userIdentity": {
        "accountId": "123456789012",
        "invokedBy": "s3.amazonaws.com"
    },
    "eventTime": "2020-02-05T18:25:30Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "JobStatusChanged",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "s3.amazonaws.com",
    "userAgent": "s3.amazonaws.com",
    "requestParameters": null,
    "responseElements": null,
    "eventID": "f907577b-bf3d-4c53-b9ed-8a83a118a554",
    "readOnly": false,
    "eventType": "AwsServiceEvent",
    "recipientAccountId": "123412341234",
    "serviceEventDetails": {
        "jobId": "d6e58ec4-897a-4b6d-975f-10d7f0fb63ce",
        "jobArn": "arn:aws:s3:us-west-2:181572960644:job/d6e58ec4-897a-4b6d-975f-10d7f0fb63ce",
        "status": "Complete",
        "jobEventId": "b268784cf0a66749f1a05bce259804f5",
        "failureCodes": [],
        "statusChangeReason": []
    }
}
```

## EventBridge rule for tracking S3 Batch Operations job events

The following example shows how to create a rule in Amazon EventBridge to capture S3 Batch Operations events recorded by AWS CloudTrail to a target of your choice.

To do this, create a rule by following all the steps in [Creating EventBridge rules that react to events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html). Paste the following S3 Batch Operations custom event pattern where applicable, and choose the target service of your choice.

**S3 Batch Operations custom event pattern policy**

```
{
    "source": [
        "aws.s3"
    ],
    "detail-type": [
        "AWS Service Event via CloudTrail"
    ],
    "detail": {
        "eventSource": [
            "s3.amazonaws.com"
        ],
        "eventName": [
            "JobCreated",
            "JobStatusChanged"
        ]
    }
}
```
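To see which events this pattern would match, you can approximate EventBridge's matching locally. The following sketch checks only the fields that this particular pattern uses (exact match against a list of allowed values); real EventBridge pattern matching supports many more operators:

```python
def matches_pattern(event: dict) -> bool:
    """Approximate the preceding custom event pattern: the event's
    source, detail-type, and the eventSource and eventName fields in
    detail must each equal one of the listed values."""
    detail = event.get("detail", {})
    return (
        event.get("source") == "aws.s3"
        and event.get("detail-type") == "AWS Service Event via CloudTrail"
        and detail.get("eventSource") == "s3.amazonaws.com"
        and detail.get("eventName") in ("JobCreated", "JobStatusChanged")
    )
```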

The following examples are two Batch Operations events that were sent to Amazon Simple Queue Service (Amazon SQS) from an EventBridge event rule. A Batch Operations job goes through many different states while processing (`New`, `Preparing`, `Active`, and so on), so you can expect to receive several messages for each job.

**Example — JobCreated sample event**  

```
{
    "version": "0",
    "id": "51dc8145-541c-5518-2349-56d7dffdf2d8",
    "detail-type": "AWS Service Event via CloudTrail",
    "source": "aws.s3",
    "account": "123456789012",
    "time": "2020-02-27T15:25:49Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "eventVersion": "1.05",
        "userIdentity": {
            "accountId": "11112223334444",
            "invokedBy": "s3.amazonaws.com"
        },
        "eventTime": "2020-02-27T15:25:49Z",
        "eventSource": "s3.amazonaws.com",
        "eventName": "JobCreated",
        "awsRegion": "us-east-1",
        "sourceIPAddress": "s3.amazonaws.com",
        "userAgent": "s3.amazonaws.com",
        "eventID": "7c38220f-f80b-4239-8b78-2ed867b7d3fa",
        "readOnly": false,
        "eventType": "AwsServiceEvent",
        "serviceEventDetails": {
            "jobId": "e849b567-5232-44be-9a0c-40988f14e80c",
            "jobArn": "arn:aws:s3:us-east-1:181572960644:job/e849b567-5232-44be-9a0c-40988f14e80c",
            "status": "New",
            "jobEventId": "f177ff24f1f097b69768e327038f30ac",
            "failureCodes": [],
            "statusChangeReason": []
        }
    }
}
```

**Example — JobStatusChanged job completion event**  

```
{
  "version": "0",
  "id": "c8791abf-2af8-c754-0435-fd869ce25233",
  "detail-type": "AWS Service Event via CloudTrail",
  "source": "aws.s3",
  "account": "123456789012",
  "time": "2020-02-27T15:26:42Z",
  "region": "us-east-1",
  "resources": [],
  "detail": {
    "eventVersion": "1.05",
    "userIdentity": {
      "accountId": "1111222233334444",
      "invokedBy": "s3.amazonaws.com"
    },
    "eventTime": "2020-02-27T15:26:42Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "JobStatusChanged",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "s3.amazonaws.com",
    "userAgent": "s3.amazonaws.com",
    "eventID": "0238c1f7-c2b0-440b-8dbd-1ed5e5833afb",
    "readOnly": false,
    "eventType": "AwsServiceEvent",
    "serviceEventDetails": {
      "jobId": "e849b567-5232-44be-9a0c-40988f14e80c",
      "jobArn": "arn:aws:s3:us-east-1:181572960644:job/e849b567-5232-44be-9a0c-40988f14e80c",
      "status": "Complete",
      "jobEventId": "51f5ac17dba408301d56cd1b2c8d1e9e",
      "failureCodes": [],
      "statusChangeReason": []
    }
  }
}
```
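When these messages arrive in a queue, the job identifier and new status live under `detail.serviceEventDetails`, as the examples above show. A minimal sketch for pulling them out of a received message body:

```python
import json

def job_update(message_body: str) -> tuple:
    """Extract (jobId, status) from an EventBridge-delivered
    Batch Operations event, per the structure shown above."""
    event = json.loads(message_body)
    details = event["detail"]["serviceEventDetails"]
    return details["jobId"], details["status"]
```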

# Examples: S3 Batch Operations completion reports

When you create an S3 Batch Operations job, you can request a completion report for all tasks or just for failed tasks. As long as at least one task has been invoked successfully, S3 Batch Operations generates a report for jobs that have completed, failed, or been canceled.

The completion report contains additional information for each task, including the object key name and version, status, error codes, and descriptions of any errors. The description of errors for each failed task can be used to diagnose issues that occur during job creation, such as permissions issues. For **Compute checksum** jobs, the completion report contains checksum values for every object.

**Note**  
Completion reports are always encrypted with Amazon S3 managed keys (SSE-S3).

**Example — Top-level manifest result file**  
The top-level `manifest.json` file contains the location of each succeeded-task report and, if the job had any failures, the location of each failed-task report, as shown in the following example.  

```
{
    "Format": "Report_CSV_20180820",
    "ReportCreationDate": "2019-04-05T17:48:39.725Z",
    "Results": [
        {
            "TaskExecutionStatus": "succeeded",
            "Bucket": "my-job-reports",
            "MD5Checksum": "83b1c4cbe93fc893f54053697e10fd6e",
            "Key": "job-f8fb9d89-a3aa-461d-bddc-ea6a1b131955/results/6217b0fab0de85c408b4be96aeaca9b195a7daa5.csv"
        },
        {
            "TaskExecutionStatus": "failed",
            "Bucket": "my-job-reports",
            "MD5Checksum": "22ee037f3515975f7719699e5c416eaa",
            "Key": "job-f8fb9d89-a3aa-461d-bddc-ea6a1b131955/results/b2ddad417e94331e9f37b44f1faf8c7ed5873f2e.csv"
        }
    ],
    "ReportSchema": "Bucket, Key, VersionId, TaskStatus, ErrorCode, HTTPStatusCode, ResultMessage"
}
```
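Given a downloaded copy of this top-level `manifest.json`, you can list which result files contain failed tasks. The following sketch uses only the fields shown in the preceding example:

```python
import json

def failed_report_keys(manifest_text: str) -> list:
    """Return the S3 keys of result files that contain failed tasks,
    based on the TaskExecutionStatus field in manifest.json."""
    manifest = json.loads(manifest_text)
    return [
        result["Key"]
        for result in manifest["Results"]
        if result["TaskExecutionStatus"] == "failed"
    ]
```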

**Succeeded task reports**

Succeeded task reports contain the following information for all *successful* tasks:
+ `Bucket`
+ `Key`
+ `VersionId`
+ `TaskStatus`
+ `ErrorCode`
+ `HTTPStatusCode`
+ `ResultMessage`

**Failed task reports**

Failed task reports contain the following information for all *failed* tasks:
+ `Bucket`
+ `Key`
+ `VersionId`
+ `TaskStatus`
+ `ErrorCode`
+ `HTTPStatusCode`
+ `ResultMessage`

**Example — Lambda function task report**  
In the following example, the Lambda function successfully copied the Amazon S3 object to another bucket. The returned Amazon S3 response is passed back to S3 Batch Operations and is then written into the final completion report.  

```
amzn-s3-demo-bucket1,image_17775,,succeeded,200,,"{u'CopySourceVersionId': 'xVR78haVKlRnurYofbTfYr3ufYbktF8h', u'CopyObjectResult': {u'LastModified': datetime.datetime(2019, 4, 5, 17, 35, 39, tzinfo=tzlocal()), u'ETag': '""fe66f4390c50f29798f040d7aae72784""'}, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': 'nXNaClIMxEJzWNmeMNQV2KpjbaCJLn0OGoXWZpuVOFS/iQYWxb3QtTvzX9SVfx2lA3oTKLwImKw=', 'RequestId': '3ED5852152014362', 'HTTPHeaders': {'content-length': '234', 'x-amz-id-2': 'nXNaClIMxEJzWNmeMNQV2KpjbaCJLn0OGoXWZpuVOFS/iQYWxb3QtTvzX9SVfx2lA3oTKLwImKw=', 'x-amz-copy-source-version-id': 'xVR78haVKlRnurYofbTfYr3ufYbktF8h', 'server': 'AmazonS3', 'x-amz-request-id': '3ED5852152014362', 'date': 'Fri, 05 Apr 2019 17:35:39 GMT', 'content-type': 'application/xml'}}}"
amzn-s3-demo-bucket1,image_17763,,succeeded,200,,"{u'CopySourceVersionId': '6HjOUSim4Wj6BTcbxToXW44pSZ.40pwq', u'CopyObjectResult': {u'LastModified': datetime.datetime(2019, 4, 5, 17, 35, 39, tzinfo=tzlocal()), u'ETag': '""fe66f4390c50f29798f040d7aae72784""'}, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': 'GiCZNYr8LHd/Thyk6beTRP96IGZk2sYxujLe13TuuLpq6U2RD3we0YoluuIdm1PRvkMwnEW1aFc=', 'RequestId': '1BC9F5B1B95D7000', 'HTTPHeaders': {'content-length': '234', 'x-amz-id-2': 'GiCZNYr8LHd/Thyk6beTRP96IGZk2sYxujLe13TuuLpq6U2RD3we0YoluuIdm1PRvkMwnEW1aFc=', 'x-amz-copy-source-version-id': '6HjOUSim4Wj6BTcbxToXW44pSZ.40pwq', 'server': 'AmazonS3', 'x-amz-request-id': '1BC9F5B1B95D7000', 'date': 'Fri, 05 Apr 2019 17:35:39 GMT', 'content-type': 'application/xml'}}}"
amzn-s3-demo-bucket1,image_17860,,succeeded,200,,"{u'CopySourceVersionId': 'm.MDD0g_QsUnYZ8TBzVFrp.TmjN8PJyX', u'CopyObjectResult': {u'LastModified': datetime.datetime(2019, 4, 5, 17, 35, 40, tzinfo=tzlocal()), u'ETag': '""fe66f4390c50f29798f040d7aae72784""'}, 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': 'F9ooZOgpE5g9sNgBZxjdiPHqB4+0DNWgj3qbsir+sKai4fv7rQEcF2fBN1VeeFc2WH45a9ygb2g=', 'RequestId': '8D9CA56A56813DF3', 'HTTPHeaders': {'content-length': '234', 'x-amz-id-2': 'F9ooZOgpE5g9sNgBZxjdiPHqB4+0DNWgj3qbsir+sKai4fv7rQEcF2fBN1VeeFc2WH45a9ygb2g=', 'x-amz-copy-source-version-id': 'm.MDD0g_QsUnYZ8TBzVFrp.TmjN8PJyX', 'server': 'AmazonS3', 'x-amz-request-id': '8D9CA56A56813DF3', 'date': 'Fri, 05 Apr 2019 17:35:40 GMT', 'content-type': 'application/xml'}}}"
```
The following example report shows a case in which the AWS Lambda function timed out, causing failures to exceed the failure threshold. The affected tasks are then marked as `PermanentFailure`.  

```
amzn-s3-demo-bucket1,image_14975,,failed,200,PermanentFailure,"Lambda returned function error: {""errorMessage"":""2019-04-05T17:35:21.155Z 2845ca0d-38d9-4c4b-abcf-379dc749c452 Task timed out after 3.00 seconds""}"
amzn-s3-demo-bucket1,image_15897,,failed,200,PermanentFailure,"Lambda returned function error: {""errorMessage"":""2019-04-05T17:35:29.610Z 2d0a330b-de9b-425f-b511-29232fde5fe4 Task timed out after 3.00 seconds""}"
amzn-s3-demo-bucket1,image_14819,,failed,200,PermanentFailure,"Lambda returned function error: {""errorMessage"":""2019-04-05T17:35:22.362Z fcf5efde-74d4-4e6d-b37a-c7f18827f551 Task timed out after 3.00 seconds""}"
amzn-s3-demo-bucket1,image_15930,,failed,200,PermanentFailure,"Lambda returned function error: {""errorMessage"":""2019-04-05T17:35:29.809Z 3dd5b57c-4a4a-48aa-8a35-cbf027b7957e Task timed out after 3.00 seconds""}"
amzn-s3-demo-bucket1,image_17644,,failed,200,PermanentFailure,"Lambda returned function error: {""errorMessage"":""2019-04-05T17:35:46.025Z 10a764e4-2b26-4d8c-9056-1e1072b4723f Task timed out after 3.00 seconds""}"
amzn-s3-demo-bucket1,image_17398,,failed,200,PermanentFailure,"Lambda returned function error: {""errorMessage"":""2019-04-05T17:35:44.661Z 1e306352-4c54-4eba-aee8-4d02f8c0235c Task timed out after 3.00 seconds""}"
```
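Report rows are CSV with the columns listed in `ReportSchema`; quoted fields can contain commas and doubled quotation marks, so parse them with a CSV reader rather than splitting on commas. The following is an illustrative sketch that collects the object keys of failed tasks:

```python
import csv
import io

def failed_task_keys(report_text: str) -> list:
    """Return the object keys (second column) of rows whose
    TaskStatus column (fourth field per the ReportSchema) is
    'failed'."""
    reader = csv.reader(io.StringIO(report_text))
    return [row[1] for row in reader if len(row) > 3 and row[3] == "failed"]
```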

**Example — Compute checksum task report**  
In the following example, the **Compute checksum** operation successfully calculated the checksum for the uploaded object while at rest. The returned Amazon S3 response is passed back to S3 Batch Operations, and is then written into the final completion report:  

```
amzn-s3-demo-bucket1,s3-standard-1mb-test-object,,succeeded,200,,"{""checksum_base64"":""bS9TOQ\u003d\u003d"",""etag"":""3c3c1813042989094598e4b57ecbdc82"",""checksumAlgorithm"":""CRC32"",""checksumType"":""FULL_OBJECT"",""checksum_hex"":""6D2F5339""}"
```
The following example report shows what happens when a **Compute checksum** operation fails, resulting in a failed task report:  

```
amzn-s3-demo-bucket1,image_14975,,failed,200,PermanentFailure,"error details: {""failureMessage"":""Task 2845ca0d-38d9-4c4b-abcf-379dc749c452 SSE-C encryption type is not supported for this operation"", ""errorCode"": ""400""}"
amzn-s3-demo-bucket1,image_14975,,failed,200,PermanentFailure,"error details: {""failureMessage"":""Task 2845ca0d-38d9-4c4b-abcf-379dc749c452 Key not found"", ""errorCode"": ""404""}"
amzn-s3-demo-bucket1,image_14975,,failed,200,PermanentFailure,"error details: {""failureMessage"":""Task 2845ca0d-38d9-4c4b-abcf-379dc749c452 Internal server error, please retry"", ""errorCode"": ""500""}"
```