

# Explore test results
<a name="explore-test-results"></a>

Once the parsing job completes, test results are available for analysis. The solution provides comprehensive metrics and tools to help you understand your application’s performance under load.

## Test run summary metrics
<a name="test-run-summary"></a>

When a test completes, the solution generates a summary that includes the following metrics:
+  **Average response time** - The average response time, in seconds, for all requests generated by the test.
+  **Average latency** - The average latency, in seconds, for all requests generated by the test.
+  **Average connection time** - The average time, in seconds, it takes to connect to the host for all requests.
+  **Average bandwidth** - The average bandwidth for all requests generated by the test.
+  **Total count** - The total number of requests.
+  **Success count** - The total number of successful requests.
+  **Error count** - The total number of errors.
+  **Requests per second** - The average requests per second for all requests generated by the test.
+  **Percentiles** - Response time percentiles including p50 (median), p90, p95, and p99, showing the distribution of response times across all requests.
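Conceptually, the percentile metrics summarize the sorted distribution of per-request response times. The following is a minimal sketch of one common approach (the nearest-rank method); the sample values are illustrative and not produced by the solution:

```python
# Sketch: deriving p50/p90/p95/p99 from a list of per-request response
# times using the nearest-rank method (sample data is illustrative).
def percentile(sorted_times, p):
    """Nearest-rank percentile over an ascending-sorted list."""
    # ceil(p/100 * n) gives the 1-based rank; index is rank - 1
    rank = max(1, -(-p * len(sorted_times) // 100))
    return sorted_times[rank - 1]

response_times = sorted([0.12, 0.15, 0.18, 0.22, 0.25, 0.31, 0.40, 0.55, 0.80, 1.20])
summary = {f"p{p}": percentile(response_times, p) for p in (50, 90, 95, 99)}
print(summary)
```

A high p99 relative to p50 indicates that a small fraction of requests are much slower than the typical request, which aggregate averages can hide.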

## Test runs table
<a name="test-runs-table"></a>

![Test Run Summary Table across all historic Test Runs](https://docs.aws.amazon.com/solutions/latest/distributed-load-testing-on-aws/images/test-run-results.png)


The test runs table displays all historical test runs for a scenario. You can:
+ View summary metrics for each test run.
+ Set a baseline test run for performance comparison.
+ Download the table as a CSV file.
+ Toggle columns to customize your view.
+ Select a test run to view detailed results.

## Baseline comparison
<a name="baseline-comparison"></a>

You can designate a test run as a baseline to compare future test runs against it. When a baseline is set:
+ The test runs table shows percentage differences (+/-%) compared to the baseline for each metric.
+ The baseline indicator helps you quickly identify performance improvements or regressions.
+ You can change or clear the baseline at any time.
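The percentage differences shown against a baseline are ordinary relative changes per metric. A minimal sketch, with illustrative metric names that are not the solution's internal field names:

```python
# Sketch: computing the +/-% difference of a test run's metrics against a
# designated baseline run (metric names are illustrative).
def percent_diff(current, baseline):
    """Signed percentage change of `current` relative to `baseline`."""
    if baseline == 0:
        return None  # no meaningful comparison against a zero baseline
    return round((current - baseline) / baseline * 100, 1)

baseline_run = {"avg_response_time": 0.25, "requests_per_second": 400.0}
latest_run = {"avg_response_time": 0.30, "requests_per_second": 380.0}

diffs = {k: percent_diff(latest_run[k], baseline_run[k]) for k in baseline_run}
print(diffs)  # positive avg_response_time change suggests a regression
```

For latency-style metrics a positive difference is a regression, while for throughput-style metrics such as requests per second a positive difference is an improvement.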

## Detailed test results
<a name="detailed-test-results"></a>

Selecting a test run opens the detailed results view with three tabs: Test Run Results, Errors, and Artifacts.

![Detailed test run results showing baseline comparison and metrics dashboard](https://docs.aws.amazon.com/solutions/latest/distributed-load-testing-on-aws/images/test-run-detailed-view.png)


**Baseline information**

If a baseline test run is set, it displays at the top of the page. You can choose **Show Actual**, **Show Percentage**, or **Remove Baseline** to control how baseline comparisons are displayed.

**Test Run Results table**

The results table provides detailed metrics with the following features:

**Dimension views**  
Toggle between three views using the dimension buttons:  
+  **Overall** - Aggregated results across all endpoints and regions
+  **By Endpoint** - Results broken down by individual endpoints
+  **By Region** - Results broken down by AWS region

**Action buttons**  
+  **Show Actual** - Display actual metric values
+  **Show Percentage** - Display percentage differences from baseline
+  **Remove Baseline** - Clear the baseline comparison

**Data export and customization**
+ Download the results table as a CSV file
+ Toggle columns to customize your view
+ Filter and sort data to focus on specific metrics

## Errors tab
<a name="errors-tab"></a>

The errors tab provides detailed error analysis:
+ View error counts by type.
+ See errors aggregated by overall test or by endpoint.
+ Identify patterns in failed requests.
+ Troubleshoot issues with specific endpoints or regions.
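Aggregation of this kind amounts to counting failed requests along different dimensions. A sketch, where the record fields are illustrative rather than the solution's actual schema:

```python
# Sketch: grouping failed requests by error type and by endpoint,
# similar in spirit to the Errors tab views (fields are illustrative).
from collections import Counter

failed_requests = [
    {"endpoint": "/checkout", "error": "HTTP 503"},
    {"endpoint": "/checkout", "error": "HTTP 503"},
    {"endpoint": "/search", "error": "Timeout"},
    {"endpoint": "/checkout", "error": "Timeout"},
]

by_type = Counter(r["error"] for r in failed_requests)
by_endpoint = Counter(r["endpoint"] for r in failed_requests)
print(by_type)      # error counts by type
print(by_endpoint)  # error counts by endpoint
```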

## Artifacts tab
<a name="artifacts-tab"></a>

The artifacts tab allows you to access all files generated during the test run:
+ View individual artifacts (logs, results files).
+ Download specific artifacts for offline analysis.
+ Download all test run artifacts as a single archive.

## S3 results structure
<a name="s3-results-structure"></a>

In version 4.0, the S3 results structure has changed to improve organization:
+  **New structure** - `scenario-id/test-run-id/results-files`.
+  **Legacy structure** - Tests run before version 4.0 show all result files at the scenario ID level.
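As a sketch, the two layouts differ only in whether the test run ID appears in the object key prefix; the IDs below are placeholders, and the actual values come from your scenario and test run:

```python
# Sketch: building the S3 key prefix for a run's result files under the
# two layouts described above (IDs are placeholders).
def results_prefix(scenario_id, test_run_id=None):
    """Return the key prefix for a run's result files.

    Version 4.0+ nests results under the test run ID; tests run before
    version 4.0 placed all result files directly under the scenario ID.
    """
    if test_run_id:  # version 4.0 and later
        return f"{scenario_id}/{test_run_id}/"
    return f"{scenario_id}/"  # pre-4.0 legacy layout

print(results_prefix("abc123", "run-001"))
print(results_prefix("abc123"))
```

Listing objects under the run-scoped prefix isolates a single test run's files, whereas the legacy prefix returns files from every pre-4.0 run of the scenario.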

**Note**  
Test results are displayed in the console. You can also access the raw test results directly in the Amazon S3 bucket under the `Results` folder. For more information on Taurus test results, see [Generating Test Reports](https://gettaurus.org/docs/Reporting/) in the *Taurus User Manual*.