Run a test scenario

After creating a test scenario, you can run it immediately or schedule it to run at a specific time in the future. When you navigate to a running test, the console displays the Scenario Details tab with real-time task status and metrics.

Running test scenario showing task status and real-time metrics

Scenario details view

The Scenario Details tab displays key information about your test.

Task status table

The Task Status table shows real-time information for each region:

  • Region - The AWS region where tasks are running

  • Task Counts - The total number of tasks configured for the region

  • Concurrency - The number of virtual users per task

  • Running - The number of tasks currently executing the test

  • Pending - The number of tasks waiting to start

  • Provisioning - The number of tasks being provisioned

Test execution workflow

When a test starts, the following workflow occurs:

  1. Task provisioning - The solution provisions containers (tasks) in the specified AWS regions. Tasks appear in the "Provisioning" column.

  2. Task startup - The solution continues to provision tasks until the target task count is reached in each region. Tasks move from "Provisioning" to "Pending" to "Running".

  3. Traffic generation - After the solution provisions all tasks in a region, they begin sending traffic to your target endpoint.

  4. Test execution - The test runs for the configured duration (ramp-up + hold time).

  5. Results parsing - When the test ends, a background parsing job aggregates and processes results from all regions.
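
As a rough illustration of steps 1-3, the following sketch polls Amazon ECS with boto3 and counts tasks by lastStatus, so you can watch tasks move through the Provisioning, Pending, and Running states. The cluster name and region are hypothetical; substitute the values from your deployment.

    import time

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")
    CLUSTER = "distributed-load-testing"  # hypothetical cluster name

    def task_status_counts(cluster):
        """Count tasks in the cluster by lastStatus (PROVISIONING, PENDING, RUNNING)."""
        counts = {}
        paginator = ecs.get_paginator("list_tasks")
        for page in paginator.paginate(cluster=cluster):
            if not page["taskArns"]:
                continue
            tasks = ecs.describe_tasks(cluster=cluster, tasks=page["taskArns"])["tasks"]
            for task in tasks:
                counts[task["lastStatus"]] = counts.get(task["lastStatus"], 0) + 1
        return counts

    # Poll until every task in the cluster reports RUNNING.
    while True:
        counts = task_status_counts(CLUSTER)
        print(counts)
        if counts and set(counts) == {"RUNNING"}:
            break
        time.sleep(10)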

Test run statuses

Test runs can have the following statuses:

  • Scheduled - The test is scheduled to run in the future.

  • Running - The test is currently in progress.

  • Cancelled - A user cancelled an in-progress test run.

  • Errored - The test run encountered an error.

  • Complete - The test run completed successfully and results are ready.
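
Because Complete, Cancelled, and Errored are terminal, a client can poll for status until it reaches one of them. The sketch below assumes a hypothetical endpoint path and response field, and omits the authentication the deployed API requires; treat it as an outline rather than the actual API contract.

    import time

    import requests

    API_BASE = "https://example.execute-api.us-east-1.amazonaws.com/prod"  # hypothetical
    TEST_ID = "abc123"  # hypothetical test ID

    TERMINAL_STATUSES = {"complete", "cancelled", "errored"}

    while True:
        resp = requests.get(f"{API_BASE}/scenarios/{TEST_ID}", timeout=10)
        resp.raise_for_status()
        status = resp.json().get("status", "").lower()  # assumed field name
        print(f"test status: {status}")
        if status in TERMINAL_STATUSES:
            break
        time.sleep(30)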

Monitoring with live data

If you enabled live data when creating the test scenario, you can view real-time metrics while the test is running. The Real Time Metrics section displays four graphs that update continuously as the test progresses, with data aggregated at one-second intervals.

Real Time Metrics graphs showing live test performance data

Graph descriptions

Average Response Time

Displays the average response time in seconds for requests processed by each region. The Y-axis shows response time in seconds, and the X-axis shows the time of day. Each region is represented by a different color in the legend.

Virtual Users

Shows the number of concurrent virtual users actively generating load in each region. The graph displays how virtual users ramp up during the test and then maintain the target concurrency level.

Successful Requests

Displays the cumulative count of successful requests over time for each region. The slope of the curve indicates the rate at which successful requests are being processed.

Failed Requests

Shows the cumulative count of failed requests over time for each region. A low or zero count indicates healthy test execution.

Multi-region visualization

When running tests across multiple regions, each graph displays data for all regions simultaneously. The legend at the bottom of each graph identifies which color represents each region (for example, us-west-2 and us-east-1).

Technical implementation

The CloudWatch log group for the Fargate tasks contains a subscription filter that captures test results. When the filter pattern matches, a Lambda function structures the data and publishes it to an AWS IoT Core topic. The web console subscribes to this topic and displays the metrics in real time.
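
A minimal sketch of that relay Lambda, assuming the standard CloudWatch Logs subscription payload (base64-encoded, gzip-compressed JSON). The topic name and message shape are illustrative, not the solution's actual values.

    import base64
    import gzip
    import json

    import boto3

    iot = boto3.client("iot-data")

    def handler(event, context):
        # CloudWatch Logs delivers a base64-encoded, gzip-compressed payload.
        payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
        for log_event in payload["logEvents"]:
            # Publish each matched result line for the console to consume.
            message = {"timestamp": log_event["timestamp"], "data": log_event["message"]}
            iot.publish(
                topic="dlt/live-data",  # hypothetical topic name
                qos=0,
                payload=json.dumps(message),
            )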

Note

Live data is ephemeral and only available while the test is running. The web console persists a maximum of 5,000 data points, after which the oldest data is replaced with the newest. If the page refreshes, the graphs will be blank and start from the next available data point. Once a test is complete, the solution stores the results data in DynamoDB and Amazon S3. If no data is available yet, the graphs display "There is no data available."
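
The 5,000-point cap behaves like a fixed-size ring buffer: once the buffer is full, each new point evicts the oldest. The console itself is a web app; this Python sketch only illustrates the retention behavior.

    from collections import deque

    MAX_POINTS = 5000
    points = deque(maxlen=MAX_POINTS)  # oldest entries are evicted automatically

    for i in range(6000):
        points.append({"t": i, "avg_response_time": 0.2})

    assert len(points) == MAX_POINTS
    assert points[0]["t"] == 1000  # the first 1,000 points were dropped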

Cancelling a test

You can cancel a running test from the web console. When you cancel a test, the following workflow occurs:

  1. The cancellation request is sent to the microservices API.

  2. The microservices API calls the task-canceler Lambda function, which stops all currently launched tasks.

  3. If the task-runner Lambda function continues to run after the initial cancellation call, tasks may continue to launch briefly.

  4. Once the task-runner Lambda function finishes, AWS Step Functions proceeds to the Cancel Test step, which runs the task-canceler Lambda function again to stop any remaining tasks.
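
The task-canceler's job amounts to stopping every task still in the cluster. A minimal boto3 sketch of that cleanup, with a hypothetical cluster name and stop reason:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")
    CLUSTER = "distributed-load-testing"  # hypothetical cluster name

    def stop_all_tasks(cluster):
        """Stop every task in the cluster and return how many were stopped."""
        stopped = 0
        paginator = ecs.get_paginator("list_tasks")
        for page in paginator.paginate(cluster=cluster):
            for task_arn in page["taskArns"]:
                ecs.stop_task(cluster=cluster, task=task_arn, reason="Test cancelled")
                stopped += 1
        return stopped

    print(f"stopped {stop_all_tasks(CLUSTER)} tasks")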

Note

Cancelled tests take time to complete the shutdown process as the solution terminates all containers. The test status will change to "Cancelled" once all resources are cleaned up.