

# Starting state machine executions in Step Functions
Starting state machines

A state machine *execution* occurs when an AWS Step Functions state machine runs and performs its tasks. Each Step Functions state machine can have multiple simultaneous executions, which you can initiate from the [Step Functions console](https://console.aws.amazon.com/states/home?region=us-east-1#/), or by using the AWS SDKs, the Step Functions API actions, or the AWS Command Line Interface (AWS CLI). An execution receives JSON input and produces JSON output. You can start a Step Functions execution in the following ways:
+ Start an execution in the Step Functions console.

  You can start a state machine in the console, watch the execution, and debug failures.
+ Call the [StartExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API action.
+ Use Amazon EventBridge to [start an execution](tutorial-cloudwatch-events-s3.md) in response to an event.
+ Use Amazon EventBridge Scheduler to [start a state machine execution](using-eventbridge-scheduler.md) on a schedule.
+ Start a [nested workflow execution](concepts-nested-workflows.md) from a Task state.
+ Start an execution with [Amazon API Gateway](tutorial-api-gateway.md).

**Tip**  
To learn how to monitor running executions, see the tutorial: [Examining state machine executions in Step Functions](debug-sm-exec-using-ui.md).
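For example, a `StartExecution` call from an SDK takes the state machine ARN and the execution input serialized as a JSON string. The following Python sketch, using a hypothetical ARN, builds the request arguments; with boto3 you would pass them to the `start_execution` method of the Step Functions client.

```python
import json

# Hypothetical ARN for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld"

def build_start_execution_request(state_machine_arn, input_payload, name=None):
    """Build keyword arguments for the StartExecution API call.

    StartExecution expects the execution input as a JSON *string*,
    so the payload dict is serialized here.
    """
    kwargs = {
        "stateMachineArn": state_machine_arn,
        "input": json.dumps(input_payload),
    }
    if name is not None:
        # Optional custom execution name for the execution ID.
        kwargs["name"] = name
    return kwargs

request = build_start_execution_request(STATE_MACHINE_ARN, {"Comment": "Hello world!"})
# With boto3 (not run here):
#   sfn = boto3.client("stepfunctions")
#   response = sfn.start_execution(**request)
```
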

# Start workflow executions from a task state in Step Functions
Start from a Task

AWS Step Functions can start workflow executions directly from a `Task` state of a state machine. This allows you to break your workflows into smaller state machines and to start executions of those state machines. By starting these new workflow executions, you can:
+ Separate higher-level workflows from lower-level, task-specific workflows.
+ Avoid repetitive elements by calling a separate state machine multiple times.
+ Create a library of modular reusable workflows for faster development.
+ Reduce complexity and make it easier to edit and troubleshoot state machines.

Step Functions can start these workflow executions by calling its own API as an [integrated service](integrate-services.md). Simply call the `StartExecution` API action from your `Task` state and pass the necessary parameters. You can call the Step Functions API using any of the [service integration patterns](connect-to-resource.md).

**Tip**  
To deploy an example nested workflow, see [Optimizing costs](https://catalog.workshops.aws/stepfunctions/nested-workflow) in *The AWS Step Functions Workshop*.

To start a new execution of a state machine, use a `Task` state similar to the following example:

```
{
   "Type": "Task",
   "Resource": "arn:aws:states:::states:startExecution",
   "Parameters": {
      "StateMachineArn": "arn:aws:states:region:account-id:stateMachine:HelloWorld",
      "Input": {
         "Comment": "Hello world!"
      }
   },
   "Retry": [
      {
         "ErrorEquals": [
            "StepFunctions.ExecutionLimitExceeded"
         ]
      }
   ],
   "End": true
}
```

This `Task` state starts a new execution of the `HelloWorld` state machine and passes the JSON comment as input.
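The `startExecution` resource shown above returns as soon as the child execution starts (the Request Response pattern). If the parent workflow should instead wait for the child workflow to complete, you can use the Run a Job (`.sync`) service integration pattern, as in the following sketch:

```
{
   "Type": "Task",
   "Resource": "arn:aws:states:::states:startExecution.sync:2",
   "Parameters": {
      "StateMachineArn": "arn:aws:states:region:account-id:stateMachine:HelloWorld",
      "Input": {
         "Comment": "Hello world!"
      }
   },
   "End": true
}
```

With the `:2` variant of `.sync`, the child execution's output is returned as parsed JSON rather than as an escaped JSON string.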

**Note**  
Quotas on the `StartExecution` API action can limit the number of executions that you can start. Use a `Retry` on `StepFunctions.ExecutionLimitExceeded` to ensure your execution starts. For more information, see the following:  
[Quotas related to API action throttling](service-quotas.md#service-limits-api-action-throttling-general)  
[Handling errors in Step Functions workflows](concepts-error-handling.md)

## Associate workflow executions


To associate a started workflow execution with the execution that started it, pass the execution ID from the [Context object](input-output-contextobject.md) in the execution input. You can access the Context object from your `Task` state in a running execution. To pass the execution ID, append `.$` to the parameter name and reference the ID in the Context object with `$$.Execution.Id`.

```
"AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID.$": "$$.Execution.Id"
```

Use the special parameter name `AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID` when you start an execution. When you include this parameter, the association provides links in the **Step details** section of the Step Functions console, so you can easily trace your workflows from the starting executions to their started workflow executions. Using the previous example, you can associate the execution ID with the started execution of the `HelloWorld` state machine as follows.

```
{  
   "Type":"Task",
   "Resource":"arn:aws:states:::states:startExecution",
   "Parameters":{  
      "StateMachineArn":"arn:aws:states:region:account-id:stateMachine:HelloWorld",
      "Input": {
        "Comment": "Hello world!",
        "AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID.$": "$$.Execution.Id"
       }
   },
   "End":true
}
```

For more information, see the following:
+ [Integrating services](integrate-services.md)
+ [Passing parameters to a service API in Step Functions](connect-parameters.md)
+ [Accessing the Context object](input-output-contextobject.md#contextobject-access)
+ [AWS Step Functions](connect-stepfunctions.md)

# Using Amazon EventBridge Scheduler to start a Step Functions state machine execution
Using EventBridge Scheduler

[Amazon EventBridge Scheduler](https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html) is a serverless scheduler that allows you to create, run, and manage tasks from one central, managed service. With EventBridge Scheduler, you can create schedules using cron and rate expressions for recurring patterns, or configure one-time invocations. You can set up flexible time windows for delivery, define retry limits, and set the maximum retention time for failed API invocations.

For example, with EventBridge Scheduler, you can start a state machine execution on a schedule when a security related event occurs or to automate a data processing job.

This page explains how to use EventBridge Scheduler to start execution of a Step Functions state machine on a schedule.

**Topics**
+ [Set up the execution role](#using-eventbridge-scheduler-execution-role)
+ [Create a schedule](#using-eventbridge-scheduler-create)
+ [Related resources](#using-eventbridge-scheduler-related-resources)

## Set up the execution role


When you create a new schedule, EventBridge Scheduler must have permission to invoke its target API operation on your behalf. You grant these permissions to EventBridge Scheduler using an *execution role*. The permission policy you attach to your schedule's execution role defines the required permissions. These permissions depend on the target API you want EventBridge Scheduler to invoke.

When you use the EventBridge Scheduler console to create a schedule, as in the following procedure, EventBridge Scheduler automatically sets up an execution role based on your selected target. If you want to create a schedule using one of the EventBridge Scheduler SDKs, the AWS CLI, or AWS CloudFormation, you must have an existing execution role that grants the permissions EventBridge Scheduler requires to invoke a target. For more information about manually setting up an execution role for your schedule, see [Setting up an execution role](https://docs.aws.amazon.com/scheduler/latest/UserGuide/setting-up.html#setting-up-execution-role) in the *Amazon EventBridge Scheduler User Guide*.
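For a Step Functions target, the execution role's permission policy must allow the `states:StartExecution` action on the state machine. A minimal policy sketch, with a placeholder state machine ARN, might look like the following:

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "states:StartExecution"
         ],
         "Resource": [
            "arn:aws:states:region:account-id:stateMachine:HelloWorld"
         ]
      }
   ]
}
```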

## Create a schedule


**To create a schedule by using the console**

1. Open the Amazon EventBridge Scheduler console at [https://console.aws.amazon.com/scheduler/home](https://console.aws.amazon.com/scheduler/home/).

1.  On the **Schedules** page, choose **Create schedule**. 

1.  On the **Specify schedule detail** page, in the **Schedule name and description** section, do the following: 

   1. For **Schedule name**, enter a name for your schedule. For example, **MyTestSchedule**. 

   1. (Optional) For **Description**, enter a description for your schedule. For example, **My first schedule**.

   1. For **Schedule group**, choose a schedule group from the dropdown list. If you don't have a group, choose **default**. To create a schedule group, choose **create your own schedule**. 

      You use schedule groups to add tags to groups of schedules. 

1. Choose your schedule options. For all of the configuration options, see [the AWS documentation website](http://docs.aws.amazon.com/step-functions/latest/dg/using-eventbridge-scheduler.html).

1. (Optional) If you chose **Recurring schedule** in the previous step, in the **Timeframe** section, do the following: 

   1. For **Timezone**, choose a timezone. 

   1. For **Start date and time**, enter a valid date in `YYYY/MM/DD` format, and then specify a timestamp in 24-hour `hh:mm` format. 

   1. For **End date and time**, enter a valid date in `YYYY/MM/DD` format, and then specify a timestamp in 24-hour `hh:mm` format. 

1. Choose **Next**. 

1. On the **Select target** page, choose the AWS API operation that EventBridge Scheduler invokes: 

   1. Choose **AWS Step Functions StartExecution**.

   1. In the **StartExecution** section, select a state machine or choose **Create new state machine**.

      Currently, you can't run Synchronous Express workflows on a schedule.

   1. Enter a JSON payload for the execution. Even if your state machine doesn't require any JSON payload, you must still include input in JSON format as shown in the following example.

      ```
      {
          "Comment": "sampleJSONData"
      }
      ```

1. Choose **Next**. 

1. On the **Settings** page, do the following: 

   1. To turn on the schedule, under **Schedule state**, toggle **Enable schedule**. 

   1. To configure a retry policy for your schedule, under **Retry policy and dead-letter queue (DLQ)**, do the following:
      + Toggle **Retry**.
      + For **Maximum age of event**, enter the maximum **hour(s)** and **min(s)** that EventBridge Scheduler must keep an unprocessed event. The maximum time is 24 hours.
      + For **Maximum retries**, enter the maximum number of times EventBridge Scheduler retries the schedule if the target returns an error. The maximum value is 185 retries.

      With retry policies, if a schedule fails to invoke its target, EventBridge Scheduler re-runs the schedule. When you configure a retry policy, you must set the maximum retention time and number of retries for the schedule.

   1. Choose where EventBridge Scheduler stores undelivered events. For more details, see [the AWS documentation website](http://docs.aws.amazon.com/step-functions/latest/dg/using-eventbridge-scheduler.html).

   1. To use a customer managed key to encrypt your target input, under **Encryption**, choose **Customize encryption settings (advanced)**. 

      If you choose this option, enter an existing KMS key ARN or choose **Create an AWS KMS key** to navigate to the AWS KMS console. For more information about how EventBridge Scheduler encrypts your data at rest, see [Encryption at rest](https://docs.aws.amazon.com/scheduler/latest/UserGuide/encryption-rest.html) in the *Amazon EventBridge Scheduler User Guide*.

   1. To have EventBridge Scheduler create a new execution role for you, choose **Create new role for this schedule**. Then, enter a name for **Role name**. If you choose this option, EventBridge Scheduler attaches the permissions necessary for your templated target to the role.

1. Choose **Next**. 

1.  In the **Review and create schedule** page, review the details of your schedule. In each section, choose **Edit** to go back to that step and edit its details. 

1. Choose **Create schedule**. 

   You can view a list of your new and existing schedules on the **Schedules** page. Under the **Status** column, verify that your new schedule is **Enabled**. 

To confirm that EventBridge Scheduler invoked the state machine, check the [state machine's Amazon CloudWatch logs](cw-logs.md).
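If you create the schedule programmatically instead, the EventBridge Scheduler `CreateSchedule` API takes the schedule expression and a target that bundles the state machine ARN, the execution role, and the JSON input. This sketch (with placeholder ARNs and an illustrative rate-based schedule) builds the parameters you would pass to boto3's `scheduler` client:

```python
import json

# Placeholder ARNs for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:MyStateMachine"
EXECUTION_ROLE_ARN = "arn:aws:iam::123456789012:role/MySchedulerExecutionRole"

def build_schedule_params(name, schedule_expression, state_machine_arn,
                          role_arn, execution_input):
    """Build parameters for scheduler.create_schedule targeting StartExecution."""
    return {
        "Name": name,
        "ScheduleExpression": schedule_expression,   # e.g. rate(...) or cron(...)
        "FlexibleTimeWindow": {"Mode": "OFF"},       # invoke at the exact scheduled time
        "Target": {
            "Arn": state_machine_arn,                # Step Functions state machine to start
            "RoleArn": role_arn,                     # execution role with states:StartExecution
            "Input": json.dumps(execution_input),    # JSON payload for the execution
            "RetryPolicy": {
                "MaximumEventAgeInSeconds": 86400,   # keep unprocessed events up to 24 hours
                "MaximumRetryAttempts": 185,         # matches the console maximum
            },
        },
    }

params = build_schedule_params(
    "MyTestSchedule", "rate(1 hour)", STATE_MACHINE_ARN,
    EXECUTION_ROLE_ARN, {"Comment": "sampleJSONData"},
)
# With boto3 (not run here):
#   scheduler = boto3.client("scheduler")
#   scheduler.create_schedule(**params)
```
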

## Related resources


 For more information about EventBridge Scheduler, see the following: 
+ [EventBridge Scheduler User Guide](https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html)
+ [EventBridge Scheduler API Reference](https://docs.aws.amazon.com/scheduler/latest/APIReference/Welcome.html)
+ [EventBridge Scheduler Pricing](https://aws.amazon.com/eventbridge/pricing/#Scheduler)

# Viewing execution details in the Step Functions console
Viewing workflow runs

You can view in-progress and past executions of workflows in the *Executions* section of the Step Functions console. 

In the *Executions* details, you can view the state machine’s definition, execution status, ARN, number of state transitions, and the inputs and outputs for individual states in the workflow. 

![\[Illustrative screenshot showing a list of executions.\]](http://docs.aws.amazon.com/step-functions/latest/dg/images/view-executions.png)


Standard workflow execution details are recorded in Step Functions, but the history of Express workflow executions is not. To record Express workflow executions, you must configure your Express state machines to send logs to Amazon CloudWatch. To set up logging for Step Functions, see [Logging in CloudWatch Logs](cw-logs.md).
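For reference, a state machine's CloudWatch logging configuration (the `loggingConfiguration` structure used with the `CreateStateMachine` and `UpdateStateMachine` API actions) looks like the following sketch, with a placeholder log group ARN:

```
{
   "level": "ALL",
   "includeExecutionData": true,
   "destinations": [
      {
         "cloudWatchLogsLogGroup": {
            "logGroupArn": "arn:aws:logs:region:account-id:log-group:/aws/vendedlogs/states/MyLogGroup:*"
         }
      }
   ]
}
```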

The console experience to view both types of workflow executions is similar, but there are some limitations for Express workflows. See [Standard and Express console experience differences](#console-exp-differences). 

**Note**  
Because execution data for Express workflows is displayed using CloudWatch Logs Insights, scanning the logs incurs charges. By default, your log group only lists executions completed in the last three hours. If you specify a larger time range that includes more execution events, your costs will increase. For more information, see **Vended Logs** under the **Logs** tab on the [CloudWatch Pricing page](https://aws.amazon.com/cloudwatch/pricing).

## Execution details overview
Execution details

The execution details link and page title use the unique execution ID generated by Step Functions or the custom ID you provided when starting the workflow. The *Execution Details* page includes metrics and the following options to manage your state machine: 
+ **Stop execution** – Stop an in-progress execution. (Unavailable for completed executions.)
+ **Start new execution** – Start a new execution of your state machine.
+ **Redrive** – Redrive executions of Standard Workflows that did not complete successfully in the last 14 days, including failed, aborted, or timed out executions. For more information, see [Redriving state machines](redrive-executions.md).
+ **Export** – Export the execution details in JSON format to share or perform offline analysis.

**Viewing executions started with a version or alias**  
You can also view the executions started with a version or an alias in the Step Functions console. For more information, see [Listing executions for versions and aliases](execution-alias-version-associate.md#view-version-alias-executions).

The *Execution Details* console page contains the following sections:

1. [Execution summary](#exec-details-intf-exec-summ)

1. [Error message](#exec-details-intf-error-banner)

1. [View mode](#exec-details-intf-sm-wf-view)

1. [Step details](#exec-details-intf-step-details)

1. [Events](#exec-details-intf-events)

### Execution summary


The *Execution summary* provides an overview of the execution details of your workflow, in the following tabs:

**Details**  
Shows information, such as the execution's status, ARN, and timestamps for execution start and end time. You can also view the total count of **State transitions** that occurred while running the state machine execution. You can also view the links for **X-Ray trace map** and Amazon CloudWatch **Execution Logs** if you enabled tracing or logs for your state machine.  
If your state machine execution was started by another state machine, you can view the link for the parent state machine on this tab.  
If your state machine execution was [redriven](redrive-executions.md), this tab displays redrive related information, for example **Redrive count**.

**Execution input and output**  
Shows the state machine execution input and output side-by-side.

**Definition**  
Shows the state machine's Amazon States Language definition.

### Error message


If your state machine execution failed, the *Execution Details* page displays an error message. Choose **Cause** or **View step details** in the error message to view the reason for execution failure or the step that caused the error.

If you choose **View step details**, Step Functions highlights the step that caused the error in the [Step details](#exec-details-intf-step-details), [Graph view](#exec-details-intf-sm-wf-view), and [Table view](#exec-details-intf-sm-wf-view) tabs. If the step is a Task, Map, or Parallel state for which you've defined retries, the **Step details** pane displays the **Retry** tab for the step. Additionally, if you've redriven the execution, you can see the retries and redrive execution details in the **Retries & redrives** tab of the **Step details** pane.

From the **Recover** dropdown button on this error message, you can either redrive your unsuccessful executions or start a new execution. For more information, see [Redriving state machines](redrive-executions.md).


### View mode


The *View mode* section contains two different visualizations for your state machine. You can choose to view a graphical representation of the workflow in **Graph view** or a table outlining the states in your workflow in **Table view**:

#### Graph view


The **Graph view** mode displays a graphical representation of your workflow. A legend is included at the bottom that indicates the execution status of the state machine. It also contains buttons that let you zoom in, zoom out, center align the full workflow, or view the workflow in full-screen mode.

From the graph view, you can choose any step in your workflow to view details about its execution in the *[Step details](#exec-details-intf-step-details)* component. When you choose a step in the **Graph view**, the **Table view** also shows that step. This is true in reverse as well: if you choose a step from **Table view**, the **Graph view** shows the same step.

If your state machine contains a `Map` state, `Parallel` state, or both, you can view their names in the workflow in the **Graph view**. In addition, for the `Map` state, the **Graph view** lets you move across different iterations of the **Map** state execution data. For example, if your **Map** state has five iterations and you want to view the execution data for the third and fourth iterations, do the following:

1. Choose the **Map** state whose iteration data you want to view.

1. From **Map iteration viewer**, choose **#2** from the dropdown list for the third iteration. This is because iterations are counted from zero. Likewise, choose **#3** from the dropdown list for the fourth iteration of the **Map** state.

   Alternatively, use the up arrow icon and down arrow icon controls to move between different iterations of the **Map** state.
**Note**  
If your state machine contains nested `Map` states, the dropdown lists for the parent and child `Map` state iterations will be displayed to represent the iteration data.

1. (Optional) If one or more of your **Map** state iterations failed to execute, or the execution was stopped, you can view its data by choosing those iteration numbers under **Failed** or **Aborted** in the dropdown list.

Finally, you can use the **Export** and **Layout** buttons to export the workflow graph as an SVG or PNG image. You can also switch between horizontal and vertical views of your workflow.

#### Table view


The **Table view** mode displays a tabular representation of the states in your workflow. In this *View mode*, you can see the details of each state that was executed in your workflow, including its name, the name of any resource it used (such as an AWS Lambda function), and if the state executed successfully.

From this view, you can choose any state in your workflow to view details about its execution in the *[Step details](#exec-details-intf-step-details)* component. When you choose a step in the **Table view**, the **Graph view** also shows that step. This is true in reverse as well: if you choose a step from **Graph view**, the **Table view** shows the same step.

You can also limit the amount of data displayed in the **Table view** mode by applying filters to the view. You can create a filter for a specific property, such as **Status** or **Redrive attempt**. For more information, see [Examine executions](debug-sm-exec-using-ui.md).

By default, this mode displays the **Name**, **Type**, **Status**, **Resource**, and **Started After** columns. You can configure the columns you want to view using the **Preferences** dialog box. The selections that you make on this dialog box persist for future state machine executions until they are changed again.

If you add the **Timeline** column, the execution duration of each state is shown with respect to the runtime for the entire execution. This is displayed as a color-coded, linear timeline. This can help you identify any performance-related issues with a specific state's execution. The color-coded segments for each state on the timeline help you identify the state's execution status, such as in-progress, failed, or aborted.

For example, if you've defined execution retries for a state in your state machine, these retries are shown in the timeline. Red segments represent the failed `Retry` attempts, while light gray segments represent the `BackoffRate` between each `Retry` attempt.

![\[Screenshot of the table view with color-coded segments on the timeline.\]](http://docs.aws.amazon.com/step-functions/latest/dg/images/sm-table-view-timeline-color-codes.png)


If your state machine contains a `Map` state, `Parallel` state, or both, you can view their names in the workflow in **Table view**. For `Map` and `Parallel` states, the **Table view** mode displays the execution data for their iterations and parallel branches as nodes inside a tree view. You can choose each node in these states to view their individual details in the *[Step details](#exec-details-intf-step-details)* section. For example, you can review the data for a specific **Map** state iteration that caused the state to fail. Expand the node for the **Map** state, and then view the status for each iteration in the **Status** column.

### Step details


The *Step details* section opens on the right when you choose a state in the **Graph view** or **Table view**. This section contains the following tabs, which provide in-depth information about the selected state:

**Input**  
Shows the input details of the selected state. If there is an error in the input, it is indicated with an error icon on the tab header. In addition, you can view the reason for the error in this tab.  
You can also choose the **Advanced view** toggle button to see the input data transfer path as the data passed through the selected state. This lets you identify how your input was processed as one or more of the fields, such as `InputPath`, `Parameters`, `ResultSelector`, `OutputPath`, and `ResultPath`, were applied to the data.

**Output**  
Shows the output of the selected state. If there is an error in the output, it is indicated with an error icon on the tab header. In addition, you can view the reason for the error in this tab.  
You can also choose the **Advanced view** toggle button to see the output data transfer path as the data passed through the selected state. This lets you identify how your input was processed as one or more of the fields, such as `InputPath`, `Parameters`, `ResultSelector`, `OutputPath`, and `ResultPath`, were applied to the data.

**Details**  
Shows information, such as the state type, its execution status, and execution duration.  
For `Task` states that use a resource, such as AWS Lambda, this tab provides links to the resource definition page and Amazon CloudWatch logs page for the resource invocation. It also shows values, if specified, for the `Task` state's `TimeoutSeconds` and `HeartbeatSeconds` fields.  
For `Map` states, this tab shows you information regarding the total count of a `Map` state's iterations. Iterations are categorized as **Failed**, **Aborted**, **Succeeded**, or **InProgress**.

**Definition**  
Shows the Amazon States Language definition corresponding to the selected state.

**Retry**  
This tab appears only if you have defined a `Retry` field in your state machine's `Task` or `Parallel` state.
Shows the initial and subsequent retry attempts for a selected state in its original execution attempt. For the initial and all the subsequent failed attempts, choose the arrow icon next to **Type** to view the **Reason** for failure that appears in a dropdown box. If the retry attempt succeeds, you can view the **Output** that appears in a dropdown box.  
If you've redriven your execution, this tab header displays the name **Retries & redrives** and displays the retry attempt details for each redrive.

**Events**  
Shows a filtered list of the events associated with the selected state in an execution. The information you see on this tab is a subset of the complete execution event history you see in the *[Events](#exec-details-intf-events)* table.

### Events


The **Events** table displays the complete history for the selected execution as a list of events spanning multiple pages. Each page contains up to 25 events. This section also displays the total event count, which can help you determine if you exceeded the maximum event history count of 25,000 events.

![\[Example screenshot showing a partial event history for a workflow execution.\]](http://docs.aws.amazon.com/step-functions/latest/dg/images/sm-exec-details-event-view.png)


By default, the results in the **Events** table are displayed in ascending order based on the **Timestamp** of the events. You can change the execution event history's sorting to descending order by clicking on the **Timestamp** column header.

In the **Events** table, each event is color-coded to indicate its execution status. For example, events that failed appear in red. To view additional details about an event, choose the arrow icon next to the event ID. Once open, the event details show the input, output, and resource invocation for the event.

In addition, in the **Events** table, you can apply filters to limit the execution event history results that are displayed. You can choose properties such as **ID**, or **Redrive attempt**. For more information, see [Examine executions](debug-sm-exec-using-ui.md).

## Standard and Express console experience differences
Standard and Express differences

**Standard workflows**  
The execution histories for Standard Workflows are always available for executions completed in the last 90 days.

**Express workflows**  
For Express workflows, the Step Functions console retrieves log data gathered through a CloudWatch Logs log group to show execution history. The histories for executions completed in the last **three hours** are available by default. You can customize the time range, but if you specify a larger time range that includes more execution events, the cost to scan the logs will increase. For more information, see **Vended Logs** under the **Logs** tab on the [CloudWatch Pricing page](https://aws.amazon.com/cloudwatch/pricing) and [Logging in CloudWatch Logs](cw-logs.md).

## Considerations and limitations for viewing Express workflow executions
Limitations viewing Express workflow executions

When viewing Express workflow executions on the Step Functions console, keep in mind the following considerations and limitations:

### Availability of Express workflow execution details relies on Amazon CloudWatch Logs


For Express workflows, their execution history and detailed execution information are gathered through CloudWatch Logs Insights. This information is kept in the CloudWatch Logs log group that you specify when you create the state machine. The state machine's execution history is shown under the **Executions** tab on the Step Functions console.

**Warning**  
If you delete the CloudWatch Logs log group for an Express workflow, its executions won't be listed under the **Executions** tab.

We recommend that you use the default log level of **ALL** for logging all execution event types. You can update the log level as required for your existing state machines when you edit them. For more information, see [Using CloudWatch Logs to log execution history in Step Functions](cw-logs.md) and [Event log levels](cw-logs.md#cloudwatch-log-level).

### Partial Express workflow execution details are available if logging level is ERROR or FATAL


By default, the logging level for Express workflow executions is set to **ALL**. If you change the log level, the execution histories and execution details for completed executions won’t be affected. However, all new executions will emit logs based on the updated log level. For more information, see [Using CloudWatch Logs to log execution history in Step Functions](cw-logs.md) and [Event log levels](cw-logs.md#cloudwatch-log-level).

For example, if you change the log level from **ALL** to either **ERROR** or **FATAL**, the **Executions** tab on the Step Functions console only lists failed executions. In the **Event view** tab, the console shows only the event details for the state machine steps that failed.

We recommend that you use the default log level of **ALL** for logging all execution event types. You can update the log level as required for your existing state machines when you edit the state machine.

### State machine definition for a prior execution can't be viewed after the state machine has been modified


State machine definitions for past executions are not stored for Express workflows. If you change your state machine definition, you can only view the state machine definition for executions using the most current definition.

For example, if you remove one or more steps from your state machine definition, Step Functions detects a mismatch between the definition and prior execution events. Because previous definitions are not stored for Express workflows, Step Functions can't display the state machine definition for executions run on an earlier version of the state machine definition. As a result, the **Definition**, **Graph view**, and **Table view** tabs are unavailable for executions run on previous versions of a state machine definition.

# Restarting state machine executions with redrive in Step Functions
Redriving state machines

You can use redrive to restart executions of Standard Workflows that didn't complete successfully in the last 14 days. These include failed, aborted, or timed out executions.

When you redrive an execution, Step Functions continues the failed execution from the unsuccessful step and uses the same input. Step Functions preserves the results and execution history of the successful steps, which are not rerun when you redrive an execution. For example, say that your workflow contains two states: a [Pass workflow state](state-pass.md) followed by a [Task workflow state](state-task.md). If your workflow execution fails at the Task state and you redrive the execution, the execution reschedules and then reruns the Task state.

Redriven executions use the same state machine definition and execution ARN that was used for the original execution attempt. If your original execution attempt was associated with a [version](concepts-state-machine-version.md), [alias](concepts-state-machine-alias.md), or both, the redriven execution is associated with the same version, alias, or both. Even if you update your alias to point to a different version, the redriven execution continues to use the version associated with the original execution attempt. Because redriven executions use the same state machine definition, you must start a new execution if you update your state machine definition.

When you redrive an execution, the state machine-level timeout, if defined, is reset to 0. For more information about state machine-level timeouts, see `TimeoutSeconds`.

Execution redrives are considered state transitions. For information about how state transitions affect billing, see [Step Functions Pricing](https://aws.amazon.com/step-functions/pricing/).

## Redrive eligibility for unsuccessful executions


You can redrive executions if your original execution attempt meets the following conditions:
+ You started the execution on or after November 15, 2023. Executions that you started prior to this date aren't eligible for redrive.
+ The execution status isn't `SUCCEEDED`.
+ The workflow execution hasn't exceeded the redrivable period of 14 days. Redrivable period refers to the time during which you can redrive a given execution. This period starts from the day a state machine completes its execution.
+ The workflow execution hasn't exceeded the maximum open time of one year. For information about state machine execution quotas, see [Quotas related to state machine executions](service-quotas.md#service-limits-state-machine-executions).
+ The execution's event history contains fewer than 24,999 events. Redriven executions append their events to the existing event history. Make sure your workflow execution contains fewer than 24,999 events to accommodate the `ExecutionRedriven` history event and at least one other history event.
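Taken together, these conditions can be sketched as a small local check. This is purely illustrative; the function and parameter names below are assumptions, not part of the Step Functions API:

```python
from datetime import datetime, timedelta, timezone

REDRIVE_LAUNCH = datetime(2023, 11, 15, tzinfo=timezone.utc)
REDRIVABLE_PERIOD = timedelta(days=14)   # time window for redriving
MAX_OPEN_TIME = timedelta(days=365)      # maximum execution open time
MAX_EVENTS = 24999                       # event history headroom limit

def is_redrive_eligible(status, start_date, stop_date, event_count, now=None):
    """Approximate the redrive eligibility rules for a Standard Workflow."""
    now = now or datetime.now(timezone.utc)
    if start_date < REDRIVE_LAUNCH:
        return False                     # started before redrive launched
    if status == "SUCCEEDED":
        return False                     # only unsuccessful executions
    if now - stop_date > REDRIVABLE_PERIOD:
        return False                     # 14-day redrivable period expired
    if stop_date - start_date > MAX_OPEN_TIME:
        return False                     # exceeded one-year maximum open time
    if event_count >= MAX_EVENTS:
        return False                     # no room for the ExecutionRedriven event
    return True
```

In practice you would read the status, timestamps, and event count from the `DescribeExecution` and `GetExecutionHistory` API responses rather than supplying them by hand.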

## Redrive behavior of individual states


The redrive behavior varies depending on which state in your workflow failed. The following table describes the redrive behavior for each state.


| State name | Redrive execution behavior | 
| --- | --- | 
| [Pass workflow state](state-pass.md) |  If a preceding step fails or the state machine times out, the Pass state is exited and isn't executed on redrive.  | 
| [Task workflow state](state-task.md) |  Schedules and starts the Task state again. When you redrive an execution that reruns a Task state, the `TimeoutSeconds` for the state, if defined, is reset to 0. For more information about timeout, see [Task state](state-task.md#task-state-fields).  | 
| [Choice workflow state](state-choice.md) | Reevaluates the Choice state rules. | 
| [Wait workflow state](state-wait.md) |  If the state specifies `Timestamp` or `TimestampPath` that refers to a timestamp in the past, redrive causes the Wait state to be exited and enters the state specified in the `Next` field.  | 
| [Succeed workflow state](state-succeed.md) |  Doesn't redrive state machine executions that enter the Succeed state.  | 
| [Fail workflow state](state-fail.md) |  Reenters the Fail state and fails again.  | 
| [Parallel workflow state](state-parallel.md) |  Reschedules and redrives only those branches that failed or aborted. If the state failed because of a `States.DataLimitExceeded` error, the Parallel state is rerun, including the branches that were successful in the original execution attempt.  | 
| [Inline Map state](state-map-inline.md) |  Reschedules and redrives only those iterations that failed or aborted. If the state failed because of a `States.DataLimitExceeded` error, the Inline Map state is rerun, including the iterations that were successful in the original execution attempt.  | 
| [Distributed Map state](state-map-distributed.md) |  Redrives the unsuccessful child workflow executions in a [Map Run](concepts-examine-map-run.md). For more information, see [Redriving Map Runs in Step Functions executions](redrive-map-run.md). If the state failed because of a `States.DataLimitExceeded` error, the Distributed Map state is rerun. This includes the child workflows that were successful in the original execution attempt.  | 

## IAM permission to redrive an execution


Step Functions needs appropriate permission to redrive an execution. The following IAM policy example grants your state machine the least privilege required to redrive an execution. Remember to replace the *italicized* text with your resource-specific information.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "states:RedriveExecution"
            ],
            "Resource": "arn:aws:states:us-east-1:123456789012:execution:myStateMachine:*"
        }
    ]
}
```

For an example of the permission you need to redrive a Map Run, see [Example of IAM policy for redriving a Distributed Map](iam-policies-eg-dist-map.md#iam-policy-redrive-dist-map).

## Redriving executions in the console


You can redrive eligible executions from the Step Functions console.

For example, imagine that you run a state machine and a parallel state fails to run.

The following image shows a **Lambda Invoke** step named **Do square number** inside a **Parallel** state that has failed. This caused the **Parallel** state to fail as well. The branches whose executions were in progress or not yet started are stopped, and the state machine execution fails.

![\[Example graph of a failed state machine execution.\]](http://docs.aws.amazon.com/step-functions/latest/dg/images/redrive-eg-failed-workflow.png)


**To redrive an execution from the console**

1. Open the [Step Functions console](https://console.aws.amazon.com/states/home?region=us-east-1#/), and then choose an existing state machine whose execution failed.

1. On the state machine detail page, under **Executions**, choose a failed execution instance.

1. Choose **Redrive**.

1. In the **Redrive** dialog box, choose **Redrive execution**.
**Tip**  
If you're on the *Execution Details* page of a failed execution, do one of the following to redrive the execution:  
Choose **Recover**, and then select **Redrive from failure**.
Choose **Actions**, and then select **Redrive**.

   Notice that redrive uses the same state machine definition and ARN. It continues running the execution from the step that failed in the original execution attempt. In this example, it's the **Do square number** step and **Wait 3 sec** branch inside the **Parallel** state. After restarting the execution of these unsuccessful steps in the **Parallel** state, redrive will continue execution for the **Done** step.

1. Choose the execution to open the *Execution Details* page.

   On this page, you can view the results of the redriven execution. For example, in the [Execution summary](concepts-view-execution-details.md#exec-details-intf-exec-summ) section, you can see **Redrive count**, which represents the number of times an execution has been redriven. In the **Events** section, you can see the redrive related execution events appended to the events of the original execution attempt. For example, the `ExecutionRedriven` event.

## Redriving executions using the API


You can redrive [eligible](#redrive-eligibility) executions using the [RedriveExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_RedriveExecution.html) API. This API restarts unsuccessful executions of Standard Workflows from the step that failed, aborted, or timed out.

In the AWS Command Line Interface (AWS CLI), run the following command to redrive an unsuccessful state machine execution. Remember to replace the *italicized* text with your resource-specific information.

```
aws stepfunctions redrive-execution --execution-arn arn:aws:states:us-east-2:account-id:execution:myStateMachine:foo
```

## Examining redriven executions


You can examine a redriven execution in the console or using the APIs: [GetExecutionHistory](https://docs.aws.amazon.com/step-functions/latest/apireference/API_GetExecutionHistory.html) and [DescribeExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html).

**Examine redriven executions on console**

1. Open the [Step Functions console](https://console.aws.amazon.com/states/home?region=us-east-1#/), and then choose an existing state machine for which you've redriven an execution.

1. Open the *Execution Details* page.

   On this page, you can view the results of the redriven execution. For example, in the [Execution summary](concepts-view-execution-details.md#exec-details-intf-exec-summ) section, you can see **Redrive count**, which represents the number of times an execution has been redriven. In the **Events** section, you can see the redrive related execution events appended to the events of the original execution attempt. For example, the `ExecutionRedriven` event.

**Examine redriven executions using APIs**  
If you've redriven a state machine execution, you can use one of the following APIs to view details about the redriven execution. Remember to replace the *italicized* text with your resource-specific information.
+ GetExecutionHistory – Returns the history of the specified execution as a list of events. This API also returns the details about the redrive attempt of an execution, if available.

  In the AWS CLI, run the following command.

  ```
  aws stepfunctions get-execution-history --execution-arn arn:aws:states:us-east-2:account-id:execution:myStateMachine:foo
  ```
+ DescribeExecution – Provides information about a state machine execution, such as the state machine associated with the execution, the execution's input and output, execution redrive details if available, and relevant execution metadata.

  In the AWS CLI, run the following command.

  ```
  aws stepfunctions describe-execution --execution-arn arn:aws:states:us-east-2:account-id:execution:myStateMachine:foo
  ```
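For a redriven execution, the `DescribeExecution` response includes redrive details such as `redriveCount` and `redriveDate`. The following sketch reads those fields from a hand-written response fragment; a live call would use an AWS SDK (for example, `boto3.client("stepfunctions").describe_execution(executionArn=...)` in Python):

```python
def summarize_redrives(execution):
    """Summarize redrive details from a DescribeExecution-shaped response."""
    count = execution.get("redriveCount", 0)
    if count == 0:
        return "never redriven"
    return f"redriven {count} time(s), last on {execution['redriveDate']}"

# Hand-written response fragment for illustration only; a real response
# comes from the DescribeExecution API action.
response = {
    "executionArn": "arn:aws:states:us-east-2:123456789012:execution:myStateMachine:foo",
    "status": "SUCCEEDED",
    "redriveCount": 2,
    "redriveDate": "2023-11-20T10:00:00Z",
}
```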

## Retry behavior of redriven executions


If your redriven execution reruns a [Task workflow state](state-task.md), [Parallel workflow state](state-parallel.md), or [Inline Map state](state-map-inline.md), for which you have defined [retries](concepts-error-handling.md#error-handling-retrying-after-an-error), the retry attempt count for these states is reset to 0 to allow for the maximum number of attempts on redrive. For a redriven execution, you can track individual retry attempts of these states using the console.
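Because the retry count resets on redrive, a state can be attempted more times in total than its `MaxAttempts` alone suggests. A rough back-of-the-envelope sketch (illustrative only, assuming every attempt fails until the last):

```python
def total_possible_attempts(max_attempts, redrive_count):
    """Upper bound on runs of a retried state across an original
    execution attempt plus a number of redrives, given that the
    retry counter is reset to 0 on each redrive."""
    # 1 initial try + max_attempts retries, per original run and per redrive
    return (1 + max_attempts) * (1 + redrive_count)
```

For example, a Task state with `MaxAttempts: 3` can run up to 4 times in the original attempt, and up to 4 more times on each redrive.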

**To examine the individual retry attempts in the console**

1. On the *Execution Details* page of the [Step Functions console](https://console.aws.amazon.com/states/home?region=us-east-1#/), choose a state that was retried on redrive.

1. Choose the **Retries & redrives** tab.

1. Choose the arrow icon next to each retry attempt to view its details. If the retry attempt succeeded, you can view the results in **Output** that appears in a dropdown box.

The following image shows an example of the retries performed for a state in the original execution attempt and the redrives of that execution. In this image, three retries are performed in the original and redrive execution attempts. The execution succeeds in the fourth redrive attempt and returns an output of 16.

![\[Illustrative screenshot showing three failed retries and success on a fourth retry.\]](http://docs.aws.amazon.com/step-functions/latest/dg/images/task-retry-redrive.png)


# Viewing a Distributed Map Run execution in Step Functions
Viewing Map Runs

The Step Functions console provides a *Map Run Details* page which displays all the information related to a *Distributed Map state* execution. For example, you can view the status of the *Distributed Map state*'s execution, the Map Run's ARN, and the statuses of the items processed in the child workflow executions started by the *Distributed Map state*. You can also view a list of all child workflow executions and access their details. If your Map Run was [redriven](redrive-map-run.md), you will see redrive details in the Map Run execution summary too.

When you run a `Map` state in Distributed mode, Step Functions creates a Map Run resource. A Map Run refers to a set of child workflow executions that a *Distributed Map state* starts, and the runtime settings that control these executions. Step Functions assigns an Amazon Resource Name (ARN) to your Map Run. You can examine a Map Run in the Step Functions console. You can also invoke the [DescribeMapRun](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeMapRun.html) API action.

Child workflow executions of a Map Run emit metrics to CloudWatch. These metrics include a state machine ARN label with the following format:

 `arn:partition:states:region:account:stateMachine:stateMachineName/MapRunLabel or UUID` 
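For example, a minimal sketch that splits such an ARN into its components (the state machine name and label below are hypothetical):

```python
def parse_map_run_metric_arn(arn):
    """Split a Map Run-labeled state machine ARN into its components.

    Expected shape: arn:partition:states:region:account:stateMachine:name/label
    """
    parts = arn.split(":", 6)
    partition, region, account = parts[1], parts[3], parts[4]
    name, _, label = parts[6].partition("/")
    return {"partition": partition, "region": region, "account": account,
            "stateMachine": name, "label": label}
```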

The *Map Run Details* page has three sections: *Map Run execution summary*, *Item processing status*, and *Listing executions*.

## Map Run execution summary


The *Map Run Execution summary* provides an overview of the execution details of the *Distributed Map state*.

**Details**  
Shows execution status of the *Distributed Map state*, the Map Run ARN, and type of the child workflow executions started by the *Distributed Map state*. You can view additional configurations, such as tolerated failure threshold for the Map Run and the maximum concurrency specified for child workflow executions.

**Input and output**  
Shows the input received by the *Distributed Map state* and the corresponding output that it generates.   
You can view the input dataset and its location, and the input filters applied to the individual data items in that dataset. If you export the output of the *Distributed Map state* execution, this tab shows the path to the Amazon S3 bucket that contains the execution results. Otherwise, it points you to the parent workflow's *Execution Details* page to view the execution output.

## Error message


If your Map Run failed, the *Map Run Details* page displays an error message with the reason for failure. 

From the **Recover** dropdown button on this error message, you can either redrive the unsuccessful child workflow executions started by this Map Run or start a new execution of the parent workflow. 

See [Redriving Map Runs](redrive-map-run.md) to learn how to restart your workflow.

## Item processing status


The **Item processing status** section displays the status of the items processed in a Map Run. For example, **Pending** indicates that a child workflow execution hasn’t started processing the item yet. 

Item statuses are dependent on the status of the child workflow executions processing the items. If a child workflow execution fails, times out, or is canceled by a user, Step Functions doesn't receive any information about the processing results of the items inside that child workflow execution. All items processed by that execution share the child workflow execution's status.

For example, say that you want to process 100 items in two child workflow executions, where each execution processes a batch of 50 items. If one of the executions fails and the other succeeds, you'll have 50 successful and 50 failed items.
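The arithmetic in that example can be sketched as follows, with each item inheriting the status of the child workflow execution that processed its batch (an illustrative tally, not an API):

```python
from collections import Counter

def tally_item_statuses(child_executions):
    """Every item in a batch inherits the status of the child
    workflow execution that processed it."""
    counts = Counter()
    for execution in child_executions:
        counts[execution["status"]] += execution["item_count"]
    return counts

# Two child workflow executions, 50 items each: one succeeds, one fails.
counts = tally_item_statuses([
    {"status": "Succeeded", "item_count": 50},
    {"status": "Failed", "item_count": 50},
])
```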

The following table explains the types of processing statuses available for all items:


| Status | Description | 
| --- | --- | 
|  **Pending**  |  Indicates an item that the child workflow execution hasn't started processing. If a Map Run stops, fails, or a user cancels the execution before processing of an item starts, the item remains in **Pending** status. For example, if a Map Run fails with 10 unprocessed items, these 10 items remain in the **Pending** status.  | 
|  **Running**  |  Indicates an item currently being processed by the child workflow execution.  | 
|  **Succeeded**  |  Indicates that the child workflow execution successfully processed the item. A successful child workflow execution can't have any failed items. If one item in the dataset fails during execution, the entire child workflow execution fails.  | 
|  **Failed**  |  Indicates that the child workflow execution either failed to process the item, or the execution timed out. If any one item processed by a child workflow execution fails, the entire child workflow execution fails. For example, consider a child workflow execution that processed 1000 items. If any one item in that dataset fails during execution, then Step Functions considers the entire child workflow execution as failed. When you [redrive](redrive-map-run.md) a Map Run, the count of items with this status is reset to 0.  | 
|  **Aborted**  |  Indicates that the child workflow execution started processing the item, but either the user cancelled the execution, or Step Functions stopped the execution because the Map Run failed. For example, consider a **Running** child workflow execution that's processing 50 items. If the Map Run stops because of a failure or because a user cancelled the execution, the child workflow execution and the status of all 50 items changes to **Aborted**. If you use a child workflow execution of the **Express** type, you can't stop the execution. When you [redrive](redrive-map-run.md) a Map Run that starts child workflow executions of type Express, the count of items with this status is reset to 0. This is because Express child workflows are restarted using the [StartExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API action instead of being redriven.  | 

## Listing executions


The **Executions** section lists all of the child workflow executions for a specific Map Run. Use the **Search by exact execution name** field to search for a specific child workflow execution. To see details about a specific execution, select a child workflow execution from the list and choose the **View details** button to open its [*Execution details*](concepts-view-execution-details.md) page.

You can also use the API or AWS CLI to list child workflow executions started by the Map Run:
+ Using the API, call [ListExecutions](https://docs.aws.amazon.com/step-functions/latest/apireference/API_ListExecutions.html) with the `mapRunArn` request parameter set to the ARN of the Map Run.
+ Using the AWS CLI, call [list-executions](https://docs.aws.amazon.com/cli/latest/reference/stepfunctions/list-executions.html) with the `--map-run-arn` option set to the ARN of the Map Run.

**Important**  
The retention policy for child workflow executions is 90 days.  
Completed child workflow executions that are older will not be displayed in the **Executions** table, even if the *Distributed Map state* or parent workflow continues to run longer than the retention period. You can view execution details, including results, of these child workflow executions if you export the *Distributed Map state* output to an Amazon S3 bucket using `ResultWriter (Map)`.

**Tip**  
Choose the refresh button to view the most current list of all child workflow executions.

# Redriving Map Runs in Step Functions executions
Redriving Map Runs

You can restart unsuccessful child workflow executions in a Map Run by [redriving](redrive-executions.md) your [parent workflow](state-map-distributed.md#dist-map-orchestrate-parallel-workloads-key-terms). A redriven parent workflow redrives all the unsuccessful states, including Distributed Map. A parent workflow redrives unsuccessful states if there's no `<stateType>Exited` event corresponding to the `<stateType>Entered` event for a state when the parent workflow completed its execution. For example, if the event history doesn't contain the `MapStateExited` event for a `MapStateEntered` event, you can redrive the parent workflow to redrive all the unsuccessful child workflow executions in the Map Run.

A Map Run either isn't started or fails in the original execution attempt when the state machine doesn't have the required permission to access the [ItemReader (Map)](input-output-itemreader.md), [ResultWriter (Map)](input-output-resultwriter.md), or both. If the Map Run wasn't started in the original execution attempt of the parent workflow, redriving the parent workflow starts the Map Run for the first time. To resolve this, add the required permissions to your state machine role, and then redrive the parent workflow. If you redrive the parent workflow without adding the required permissions, it attempts to start a new Map Run that will fail again. For information about the permissions that you might need, see [IAM policies for using Distributed Map states](iam-policies-eg-dist-map.md).

**Contents**
+ [Redrive eligibility for child workflows in a Map Run](#redrive-eligibility-map-run)
+ [Child workflow execution redrive behavior](#redrive-child-workflow-behavior)
+ [Scenarios of input used on Map Run redrive](#maprun-redrive-input)
+ [IAM permission to redrive a Map Run](#maprun-iam-permission)
+ [Redriving Map Run in console](#redrive-maprun-console)
+ [Redriving Map Run using API](#redrive-maprun-api)

## Redrive eligibility for child workflows in a Map Run


You can redrive the unsuccessful child workflow executions in a Map Run if the following conditions are met:
+ You started the parent workflow execution on or after November 15, 2023. Executions that you started prior to this date aren't eligible for redrive.
+ You haven't exceeded the hard limit of 1000 redrives of a given Map Run. If you've exceeded this limit, you'll receive the `States.Runtime` error.
+ The parent workflow is redrivable. If the parent workflow isn't redrivable, you can't redrive the child workflow executions in a Map Run. For more information about redrive eligibility of a workflow, see [Redrive eligibility for unsuccessful executions](redrive-executions.md#redrive-eligibility).
+ The child workflow executions of type Standard in your Map Run haven't exceeded the 25,000 execution event history limit. Child workflow executions that have exceeded the event history limit are counted towards the [tolerated failure threshold](state-map-distributed.md#maprun-fail-threshold) and considered as failed. For more information about the redrive eligibility of an execution, see [Redrive eligibility for unsuccessful executions](redrive-executions.md#redrive-eligibility).

In the following cases, Step Functions starts a new Map Run instead of redriving the existing one, even if the Map Run failed in the original execution attempt:
+ Map Run failed because of the `States.DataLimitExceeded` error.
+ Map Run failed because of the JSON data interpolation error, `States.Runtime`. For example, you selected a non-existent JSON node in [Filtering state output using OutputPath](input-output-example.md#input-output-outputpath).

A Map Run can continue to run even after the parent workflow stops or times out. In these scenarios, the redrive doesn't happen immediately:
+ Map Run might still be canceling in progress child workflow executions of type Standard, or waiting for child workflow executions of type Express to complete their executions.
+ Map Run might still be writing results to the [ResultWriter (Map)](input-output-resultwriter.md), if you configured it to export results.

In these cases, the running Map Run completes its operations before attempting to redrive.

## Child workflow execution redrive behavior


The redriven child workflow executions in a Map Run exhibit the behavior described in the following table.


| Express child workflow | Standard child workflow | 
| --- | --- | 
| All child workflow executions that failed or timed out in the original execution attempt are started using the [StartExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API action. These executions restart from the first state in the [ItemProcessor](state-map-distributed.md#distitemprocessor). | All child workflow executions that failed, timed out, or were canceled in the original execution attempt are redriven using the [RedriveExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_RedriveExecution.html) API action. These child workflows are redriven from the state in the [ItemProcessor](state-map-distributed.md#distitemprocessor) that resulted in their unsuccessful execution. | 
|  Unsuccessful executions can always be redriven. This is because Express child workflow executions are always started as new executions with the StartExecution API action.  | Unsuccessful Standard child workflow executions can't always be redriven. If an execution isn't redrivable, it won't be attempted again, and its last error or output is permanent. This can happen when an execution exceeds the limit of 25,000 history events, or when its 14-day redrivable period has expired. For example, a Standard child workflow execution might not be redrivable even though the parent workflow execution closed within the last 14 days, because the child workflow execution itself closed more than 14 days ago. | 
| Express child workflow executions use the same execution ARN as the original execution attempt, but you can't distinctly identify their individual redrives. | Standard child workflow executions use the same execution ARN as the original execution attempt. You can distinctly identify the individual redrives in the console and using APIs, such as [GetExecutionHistory](https://docs.aws.amazon.com/step-functions/latest/apireference/API_GetExecutionHistory.html) and [DescribeExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html). For more information, see [Examining redriven executions](redrive-executions.md#examine-redriven-executions). | 

If you've redriven a Map Run, and it has reached its concurrency limit, the child workflow executions in that Map Run transition to the pending state. The execution status of the Map Run also transitions to the **Pending redrive** state. Until the specified concurrency limit can allow for more child workflow executions to run, the execution remains in the **Pending redrive** state.

For example, say that the concurrency limit of the Distributed Map in your workflow is 3000, and the number of child workflows to be rerun is 6000. This causes 3000 child workflows to run in parallel while the remaining 3000 workflows remain in the **Pending redrive** state. After the first batch of 3000 child workflows complete their execution, the remaining 3000 child workflows are run.
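That scheduling arithmetic reduces to the following (an illustrative sketch, not part of the service):

```python
def redrive_batch(pending_children, concurrency_limit):
    """Return how many child workflows run now and how many remain
    in the Pending redrive state, given the concurrency limit."""
    running = min(pending_children, concurrency_limit)
    pending_redrive = pending_children - running
    return running, pending_redrive
```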

When a Map Run has completed its execution or is aborted, the count of child workflow executions in the **Pending redrive** state is reset to 0.

## Scenarios of input used on Map Run redrive


Depending on how you provided input to the Distributed Map in the original execution attempt, a redriven Map Run will use the input as described in the following table.


| Input in the original execution attempt | Input used on Map Run redrive | 
| --- | --- | 
| Input passed from a previous state or the execution input. | The redriven Map Run uses the same input. | 
| Input passed using [ItemReader (Map)](input-output-itemreader.md) and the Map Run didn't start the child workflow executions because one of the following conditions is true: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/step-functions/latest/dg/redrive-map-run.html)  | The redriven Map Run uses the input in the Amazon S3 bucket. | 
| Input passed using ItemReader. The Map Run failed after starting or attempting to start child workflow executions. | The redriven Map Run uses the same input provided in the original execution attempt. | 

## IAM permission to redrive a Map Run


Step Functions needs appropriate permission to redrive a Map Run. The following IAM policy example grants your state machine the least privilege required to redrive a Map Run. Remember to replace the *italicized* text with your resource-specific information.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "states:RedriveExecution"
      ],
      "Resource": "arn:aws:states:us-east-2:123456789012:execution:myStateMachineName/myMapRunLabel:*"
    }
  ]
}
```

## Redriving Map Run in console


The following image shows the execution graph of a state machine that contains a Distributed Map. This execution failed because the Map Run failed. To redrive the Map Run, you must redrive the parent workflow.

![\[Graph of a failed state machine execution caused by a failed Map Run.\]](http://docs.aws.amazon.com/step-functions/latest/dg/images/redrive-eg-failed-maprun.png)


**To redrive a Map Run from the console**

1. Open the [Step Functions console](https://console.aws.amazon.com/states/home?region=us-east-1#/), and then choose an existing state machine that contains a Distributed Map and whose execution failed.

1. On the state machine detail page, under **Executions**, choose a failed execution instance of this state machine.

1. Choose **Redrive**.

1. In the **Redrive** dialog box, choose **Redrive execution**.
**Tip**  
You can also redrive a Map Run from the *Execution Details* or *Map Run Details* page.  
If you're on the *Execution Details* page, do one of the following to redrive the execution:  
Choose **Recover**, and then select **Redrive from failure**.
Choose **Actions**, and then select **Redrive**.
If you're on the *Map Run Details* page, choose **Recover**, and then select **Redrive from failure**.

   Notice that redrive uses the same state machine definition and ARN. It continues running the execution from the step that failed in the original execution attempt. In this example, it's the Distributed Map step named **Map** and the **Process input** step inside it. After restarting the unsuccessful child workflow executions of the Map Run, redrive will continue execution for the **Done** step.

1. From the *Execution Details* page, choose **Map Run** to see the details of the redriven Map Run.

   On this page, you can view the results of the redriven execution. For example, in the [Map Run execution summary](concepts-examine-map-run.md#map-run-exec-summary) section, you can see **Redrive count**, which represents the number of times the Map Run has been redriven. In the **Events** section, you can see the redrive related execution events appended to the events of the original execution attempt. For example, the `MapRunRedriven` event.

After you've redriven a Map Run, you can examine its redrive details in the console or using the [GetExecutionHistory](https://docs.aws.amazon.com/step-functions/latest/apireference/API_GetExecutionHistory.html) and [DescribeExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API actions. For more information about examining a redriven execution, see [Examining redriven executions](redrive-executions.md#examine-redriven-executions).

## Redriving Map Run using API


You can redrive an [eligible](#redrive-eligibility-map-run) Map Run using the [RedriveExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_RedriveExecution.html) API on the parent workflow. This API restarts unsuccessful child workflow executions in a Map Run.

In the AWS Command Line Interface (AWS CLI), run the following command to redrive an unsuccessful state machine execution. Remember to replace the *italicized* text with your resource-specific information.

```
aws stepfunctions redrive-execution --execution-arn arn:aws:states:us-east-2:account-id:execution:myStateMachine:foo
```

After you have redriven a Map Run, you can examine its redrive details in the console or using the [DescribeMapRun](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeMapRun.html) API action. To examine the redrive details of Standard workflow executions in a Map Run, you can use the [GetExecutionHistory](https://docs.aws.amazon.com/step-functions/latest/apireference/API_GetExecutionHistory.html) or [DescribeExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API action. For more information about examining a redriven execution, see [Examining redriven executions](redrive-executions.md#examine-redriven-executions).

You can examine the redrive details of Express workflow executions in a Map Run on the [Step Functions console](https://console.aws.amazon.com/states/home?region=us-east-1#/) if you've enabled logging on the parent workflow. For more information, see [Using CloudWatch Logs to log execution history in Step Functions](cw-logs.md).