

# Observe your agent applications on Amazon Bedrock AgentCore Observability
AgentCore Observability helps you trace, debug, and monitor agent performance in production environments. It offers detailed visualizations of each step in the agent workflow, enabling you to inspect an agent’s execution path, audit intermediate outputs, and debug performance bottlenecks and failures.

AgentCore Observability gives you real-time visibility into agent operational performance through access to dashboards powered by Amazon CloudWatch and telemetry for key metrics such as session count, latency, duration, token usage, and error rates. Rich metadata tagging and filtering simplify issue investigation and quality maintenance at scale. AgentCore emits telemetry data in standardized OpenTelemetry (OTEL)-compatible format, enabling you to easily integrate it with your existing monitoring and observability stack.

By default, AgentCore outputs a set of key built-in metrics for agents, gateway resources, and memory resources. For memory resources, AgentCore also outputs spans and log data if you enable it. You can also instrument your agent code to provide additional span and trace data and custom metrics and logs. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) to learn more.

All of the metrics, spans, and logs output by AgentCore are stored in Amazon CloudWatch, and can be viewed in the CloudWatch console or downloaded from CloudWatch using the AWS CLI or one of the AWS SDKs.
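Because this data lands in standard CloudWatch namespaces, you can retrieve it programmatically with the CloudWatch `GetMetricData` API. The following sketch builds one metric query as a plain Python dict; the `bedrock-agentcore` namespace is the one used elsewhere in this guide, but the metric and dimension names shown here are illustrative assumptions — check the namespace in your account for the exact names.

```python
import json

def build_metric_query(metric_name: str, agent_runtime_id: str) -> dict:
    """Build one GetMetricData query for an AgentCore metric.

    The metric and dimension names below are placeholders; substitute
    the names you see under the bedrock-agentcore namespace.
    """
    return {
        "Id": metric_name.lower(),
        "MetricStat": {
            "Metric": {
                "Namespace": "bedrock-agentcore",
                "MetricName": metric_name,
                "Dimensions": [
                    {"Name": "AgentRuntimeId", "Value": agent_runtime_id}
                ],
            },
            "Period": 300,  # 5-minute buckets
            "Stat": "Sum",
        },
        "ReturnData": True,
    }

query = build_metric_query("Invocations", "my-agent-runtime-id")
# With AWS credentials configured, a real call would look like:
# import boto3
# from datetime import datetime, timedelta, timezone
# end = datetime.now(timezone.utc)
# resp = boto3.client("cloudwatch").get_metric_data(
#     MetricDataQueries=[query], StartTime=end - timedelta(hours=1), EndTime=end)
print(json.dumps(query, indent=2))
```

The same query shape works for latency, token usage, or error-rate metrics by changing `MetricName` and `Stat`.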

In addition to the raw data stored in CloudWatch Logs, for agent runtime data only, the CloudWatch console provides an observability dashboard containing trace visualizations, graphs for custom span metrics, error breakdowns, and more. To learn more about viewing your agents' observability data, see [View observability data for your Amazon Bedrock AgentCore agents](observability-view.md).

**Topics**
+ [Get started with AgentCore Observability](observability-get-started.md)
+ [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md)
+ [Understand observability for agentic resources in AgentCore](observability-telemetry.md)
+ [Amazon Bedrock AgentCore generated observability data](observability-service-provided.md)
+ [View observability data for your Amazon Bedrock AgentCore agents](observability-view.md)
+ [Monitor AgentCore resources across accounts](observability-cross-account.md)

# Get started with AgentCore Observability
Amazon Bedrock AgentCore Observability helps you trace, debug, and monitor agent performance in production environments. This guide helps you implement observability features in your agent applications.

**Topics**
+ [Prerequisites](#prerequisites)
+ [Step 1: Enable transaction search on CloudWatch](#enabling-transaction-search)
+ [Step 2: Enable observability for Amazon Bedrock AgentCore Runtime hosted agents](#enabling-observability-runtime-hosted)
+ [Step 3: Enable observability for non-Amazon Bedrock AgentCore-hosted agents](#enabling-observability-non-runtime-hosted)
+ [Step 4: Observe your agent with GenAI observability on Amazon CloudWatch](#agentcore-observability-genai-cloudwatch)
+ [Best practices](#best-practices)

## Prerequisites


Before starting, make sure you have:
+ **AWS account** with credentials configured (`aws configure`) and model access enabled for the foundation model you want to use.
+ **Python 3.10+** installed.
+ **Enable transaction search** on Amazon CloudWatch. First-time users must enable [CloudWatch Transaction Search](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Enable-TransactionSearch.html) once per account to view Amazon Bedrock AgentCore spans and traces.
+ **(Non-runtime agents only) Add the OpenTelemetry library** – Include `aws-opentelemetry-distro` (ADOT) in your requirements.txt file. If you deploy using the AgentCore CLI, the runtime automatically instruments your agent and this step is not required.
+ **(Non-runtime agents only)** Make sure that your framework is configured to emit traces (for example, the `strands-agents[otel]` package). You may sometimes need to include your agent framework’s auto-instrumentor (for example, `opentelemetry-instrumentation-langchain`).

Amazon Bedrock AgentCore Observability offers two ways to configure monitoring to match different infrastructure needs:

1. Amazon Bedrock AgentCore Runtime-hosted agents

1. Non-runtime hosted agents

As a one-time setup per AWS account, first-time users need to enable Transaction Search on Amazon CloudWatch. There are two ways to do this: through the API or through the CloudWatch console.

## Step 1: Enable transaction search on CloudWatch


After you enable Transaction Search, it can take ten minutes for spans to become available for search and analysis. Choose one of the options below:

### Option 1: Enable transaction search using an API


 **To enable transaction search using the API** 

1. Create a policy that grants access to ingest spans in CloudWatch Logs using AWS CLI.

   The following example shows how to format your AWS CLI command with `PutResourcePolicy`.

   ```
   aws logs put-resource-policy --policy-name MyResourcePolicy --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "TransactionSearchXRayAccess",
         "Effect": "Allow",
         "Principal": { "Service": "xray.amazonaws.com" },
         "Action": "logs:PutLogEvents",
         "Resource": [
           "arn:partition:logs:region:account-id:log-group:aws/spans:*",
           "arn:partition:logs:region:account-id:log-group:/aws/application-signals/data:*"
         ],
         "Condition": {
           "ArnLike": { "aws:SourceArn": "arn:partition:xray:region:account-id:*" },
           "StringEquals": { "aws:SourceAccount": "account-id" }
         }
       }
     ]
   }'
   ```

1. Configure the destination of trace segments.

   The following example shows how to format your AWS CLI command with `UpdateTraceSegmentDestination`.

   ```
   aws xray update-trace-segment-destination --destination CloudWatchLogs
   ```

1. **(Optional)** Configure the number of spans to index.

   Configure your desired sampling percentage with `UpdateIndexingRule`.

   ```
   aws xray update-indexing-rule --name "Default" --rule '{"Probabilistic": {"DesiredSamplingPercentage": number}}'
   ```
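The policy document from step 1 can also be generated programmatically, for example when enabling Transaction Search across several accounts. A minimal sketch (the partition, Region, and account ID are placeholders):

```python
import json

def transaction_search_policy(partition: str, region: str, account_id: str) -> str:
    """Render the resource policy from the CLI example above for a
    specific partition/Region/account."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "TransactionSearchXRayAccess",
            "Effect": "Allow",
            "Principal": {"Service": "xray.amazonaws.com"},
            "Action": "logs:PutLogEvents",
            "Resource": [
                f"arn:{partition}:logs:{region}:{account_id}:log-group:aws/spans:*",
                f"arn:{partition}:logs:{region}:{account_id}:log-group:/aws/application-signals/data:*",
            ],
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": f"arn:{partition}:xray:{region}:{account_id}:*"
                },
                "StringEquals": {"aws:SourceAccount": account_id},
            },
        }],
    }
    return json.dumps(policy)

doc = transaction_search_policy("aws", "us-east-1", "123456789012")
# Pass `doc` as --policy-document to `aws logs put-resource-policy`, or with boto3:
# import boto3
# boto3.client("logs").put_resource_policy(
#     policyName="MyResourcePolicy", policyDocument=doc)
```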

### Option 2: Enable transaction search in the CloudWatch console


 **To enable transaction search in the CloudWatch console** 

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane under **Setup** , choose **Settings**.

1. Select **Account** and choose **X-Ray traces** tab.

1. In the **Transaction Search** section, choose **View settings**.

1. On the page that opens, choose **Edit**.

1. Choose **Enable Transaction Search**.

1. Select **For X-Ray users** and enter the percentage of traces to index. You can index 1% of traces at no cost and adjust this percentage later based on your needs.

1. Choose **Save**. Wait until **Ingest OpenTelemetry spans** shows **Enabled** before sending traces.

Next, let's explore the two ways to configure observability.

## Step 2: Enable observability for Amazon Bedrock AgentCore Runtime hosted agents


Amazon Bedrock AgentCore Runtime-hosted agents are deployed and executed directly within the Amazon Bedrock AgentCore environment, providing automatic instrumentation with minimal configuration. When you deploy an agent using the AgentCore CLI, the runtime automatically instruments your agent with OpenTelemetry — no additional OTEL libraries or configuration are needed.

For a complete example, refer to this [notebook](https://github.com/awslabs/amazon-bedrock-agentcore-samples/blob/main/01-tutorials/06-AgentCore-observability/01-Agentcore-runtime-hosted/Strands%20Agents/runtime_with_strands_and_bedrock_models.ipynb).

### Create your agent project


Create a new project using the AgentCore CLI. This sets up your project folder, virtual environment, and dependencies:

```
pip install bedrock-agentcore-starter-toolkit
agentcore create --name StrandsClaudeGettingStarted
```

In the project’s agent directory, replace the default agent code with your own agent logic. The following is an example using the Strands Agents SDK:

```
# app/StrandsClaudeGettingStarted/main.py
from strands import Agent, tool
from strands_tools import calculator
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands.models import BedrockModel

app = BedrockAgentCoreApp()

@tool
def weather():
    """Get weather"""
    return "sunny"

model = BedrockModel(
    model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
)
agent = Agent(
    model=model,
    tools=[calculator, weather],
    system_prompt="You're a helpful assistant. You can do simple math calculation, and tell the weather."
)

@app.entrypoint
def strands_agent_bedrock(payload):
    """Invoke the agent with a payload"""
    user_input = payload.get("prompt")
    response = agent(user_input)
    return response.message['content'][0]['text']

if __name__ == "__main__":
    app.run()
```

### Deploy and invoke your agent


Deploy the agent to AgentCore Runtime. The AgentCore CLI handles packaging, deployment, and automatic OTEL instrumentation:

```
cd StrandsClaudeGettingStarted
agentcore deploy
```

After deployment, your agent runs on AgentCore Runtime and is automatically instrumented using OpenTelemetry. Invoke your agent and view the traces, sessions, and metrics on the GenAI Observability dashboard in Amazon CloudWatch:

```
agentcore invoke
```

Alternatively, you can invoke your agent programmatically using the AWS SDK:

```
import boto3, json

client = boto3.client('bedrock-agentcore')

response = client.invoke_agent_runtime(
    agentRuntimeArn="YOUR_AGENT_RUNTIME_ARN",
    runtimeSessionId="my-observability-session-001",
    payload=json.dumps({"prompt": "What is 2 + 2?"}),
    qualifier="DEFAULT"
)

print(json.loads(response['response'].read()))
```

## Step 3: Enable observability for non-Amazon Bedrock AgentCore-hosted agents


For agents running outside of the Amazon Bedrock AgentCore runtime, you can get the same monitoring capabilities on your own infrastructure. This provides consistent observability regardless of where your agents run. Use the following steps to configure the environment variables needed to observe your agents.

For a complete example, refer to this [notebook](https://github.com/awslabs/amazon-bedrock-agentcore-samples/blob/main/01-tutorials/06-AgentCore-observability/02-Agent-not-hosted-on-runtime/Strands/Strands_Observability.ipynb).

### Configure AWS environment variables


```
export AWS_ACCOUNT_ID=<account id>
export AWS_DEFAULT_REGION=<default region>
export AWS_REGION=<region>
export AWS_ACCESS_KEY_ID=<access key id>
export AWS_SECRET_ACCESS_KEY=<secret key>
```

### Configure CloudWatch logging


Create a log group and log stream for your agent in Amazon CloudWatch Logs. You'll use their names to configure the environment variables below.

### Configure OpenTelemetry environment variables


```
export AGENT_OBSERVABILITY_ENABLED=true # Activates the ADOT pipeline
export OTEL_PYTHON_DISTRO=aws_distro # Uses AWS Distro for OpenTelemetry
export OTEL_PYTHON_CONFIGURATOR=aws_configurator # Sets AWS configurator for ADOT SDK
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf # Configures export protocol
export OTEL_EXPORTER_OTLP_LOGS_HEADERS=x-aws-log-group=<YOUR-LOG-GROUP>,x-aws-log-stream=<YOUR-LOG-STREAM>,x-aws-metric-namespace=<YOUR-NAMESPACE> # Directs logs to CloudWatch log groups
export OTEL_RESOURCE_ATTRIBUTES=service.name=<YOUR-AGENT-NAME> # Identifies your agent in observability data
```

Replace *<YOUR-AGENT-NAME>* with a unique name to identify this agent in the GenAI Observability dashboard and logs.
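If you start your agent from a wrapper script, you can assemble the same variables programmatically instead of exporting them by hand. A minimal sketch — the agent name, log group, stream, and namespace values are placeholders:

```python
import os

def otel_env(agent_name: str, log_group: str, log_stream: str, namespace: str) -> dict:
    """Build the OTEL environment variables shown in the exports above."""
    return {
        "AGENT_OBSERVABILITY_ENABLED": "true",
        "OTEL_PYTHON_DISTRO": "aws_distro",
        "OTEL_PYTHON_CONFIGURATOR": "aws_configurator",
        "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
        "OTEL_EXPORTER_OTLP_LOGS_HEADERS": (
            f"x-aws-log-group={log_group},"
            f"x-aws-log-stream={log_stream},"
            f"x-aws-metric-namespace={namespace}"
        ),
        "OTEL_RESOURCE_ATTRIBUTES": f"service.name={agent_name}",
    }

env = otel_env("weather-agent", "/agents/weather-agent-logs", "default", "weather-agent")
os.environ.update(env)  # apply before launching the instrumented process
```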

### Create an agent locally


```
# agent.py - a Strands agent that acts as a weather assistant
from strands import Agent
from strands_tools import http_request

# Define a weather-focused system prompt
WEATHER_SYSTEM_PROMPT = """You are a weather assistant with HTTP capabilities. You can:

1. Make HTTP requests to the National Weather Service API
2. Process and display weather forecast data
3. Provide weather information for locations in the United States

When retrieving weather information:
1. First get the coordinates or grid information using https://api.weather.gov/points/{latitude},{longitude} or https://api.weather.gov/points/{zipcode}
2. Then use the returned forecast URL to get the actual forecast

When displaying responses:
- Format weather data in a human-readable way
- Highlight important information like temperature, precipitation, and alerts
- Handle errors appropriately
- Convert technical terms to user-friendly language

Always explain the weather conditions clearly and provide context for the forecast.
"""

# Create an agent with HTTP capabilities
weather_agent = Agent(
    system_prompt=WEATHER_SYSTEM_PROMPT,
    tools=[http_request],  # Explicitly enable http_request tool
)

response = weather_agent("What's the weather like in Seattle?")
print(response)
```

### Run your agent with automatic instrumentation


With `aws-opentelemetry-distro` in your requirements.txt, the `opentelemetry-instrument` command will:
+ Load your OTEL configuration from your environment variables
+ Automatically instrument Strands, Amazon Bedrock calls, agent tools, databases, and other requests made by the agent
+ Send traces to CloudWatch
+ Enable you to visualize the agent’s decision-making process in the GenAI Observability dashboard

```
opentelemetry-instrument python agent.py
```

You can now view your traces, sessions, and metrics on the GenAI Observability dashboard in Amazon CloudWatch, under the **YOUR-AGENT-NAME** value that you configured in your [environment variables](#configure-opentelemetry-environment-variables).

To correlate traces across multiple agent runs, you can associate a session ID with your telemetry data using OpenTelemetry baggage:

```
from opentelemetry import baggage, context

ctx = baggage.set_baggage("session.id", session_id)
token = context.attach(ctx)  # Make the new context active
```

Run the session-enabled version with the following command; the complete implementation is provided in the [notebook](https://github.com/awslabs/amazon-bedrock-agentcore-samples/blob/main/01-tutorials/06-AgentCore-observability/02-Agent-not-hosted-on-runtime/Strands/Strands_Observability.ipynb):

```
opentelemetry-instrument python strands_travel_agent_with_session.py --session-id "user-session-123"
```
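A script like this needs to read the `--session-id` flag before attaching it as baggage. The following is a minimal, hypothetical sketch of that argument handling (the notebook's actual implementation may differ):

```python
import argparse

def parse_session_args(argv=None):
    """Parse the --session-id flag used in the command above."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--session-id", default="default-session")
    return parser.parse_args(argv)

args = parse_session_args(["--session-id", "user-session-123"])
# With OpenTelemetry installed, attach the value as baggage before
# invoking the agent:
# from opentelemetry import baggage, context
# context.attach(baggage.set_baggage("session.id", args.session_id))
print(args.session_id)
```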

## Step 4: Observe your agent with GenAI observability on Amazon CloudWatch


After implementing observability, you can view the collected data in CloudWatch:

### Observe your agent


1. Open the [GenAI Observability on CloudWatch console](https://console.aws.amazon.com/cloudwatch/home#gen-ai-observability).

1. On the dashboard, view the data related to model invocations and agents on Amazon Bedrock AgentCore.

1. In the **Bedrock AgentCore** tab, you can access the **Agents**, **Sessions**, and **Traces** views.

1. The **Agents** view lists all your agents, both on and off the runtime. You can also choose an agent to view further details, such as runtime metrics, sessions, and traces specific to that agent.

1. In the **Sessions** view, you can navigate across all the sessions associated with your agents.

1. In the **Traces** view, you can look into trace and span information for your agents. You can also explore the trace trajectory and timeline by choosing a trace.

### View logs in CloudWatch


 **To view logs in CloudWatch** 

1. Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, expand **Logs** and select **Log groups**.

1. Search for your agent’s log group:
   + Standard logs (stdout/stderr): `/aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/[runtime-logs] <UUID>` 
   + OTEL structured logs: `/aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/runtime-logs` 

### View traces and spans


 **To view traces and spans** 

1. Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/).

1. Select **Transaction Search** from the left navigation.

1. Spans are stored in `/aws/spans/default`.

1. Filter by service name or other criteria.

1. Select a trace to view the detailed execution graph.

### View metrics


 **To view metrics** 

1. Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/).

1. Select **Metrics** from the left navigation.

1. Browse to the `bedrock-agentcore` namespace.

1. Explore the available metrics.

## Best practices


1. **Start simple, then expand** – The default observability provided by Amazon Bedrock AgentCore captures the most critical metrics automatically, including model calls, token usage, and tool execution.

1. **Configure for development stage** – Tailor your observability configuration to match your current development phase and adjust progressively.

1. **Use consistent naming** – Establish naming conventions for services, spans, and attributes from the start.

1. **Filter sensitive data** – Prevent exposure of confidential information by filtering sensitive data from observability attributes and payloads.

1. **Set up alerts** – Configure CloudWatch alarms to notify you of potential issues before they impact users.
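For the last practice, a CloudWatch alarm can watch an AgentCore metric and notify an SNS topic. The sketch below only builds the `put_metric_alarm` parameters; the metric name, dimension, and SNS topic ARN are illustrative assumptions — substitute the metrics you actually see under the `bedrock-agentcore` namespace.

```python
import json

# Alarm when errors exceed a threshold over two 5-minute periods.
# MetricName, Dimensions, and AlarmActions below are placeholders.
alarm_params = {
    "AlarmName": "agentcore-error-spike",
    "Namespace": "bedrock-agentcore",
    "MetricName": "SystemErrors",
    "Dimensions": [{"Name": "AgentRuntimeId", "Value": "my-agent-runtime-id"}],
    "Statistic": "Sum",
    "Period": 300,
    "EvaluationPeriods": 2,
    "Threshold": 5,
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "notBreaching",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:agent-alerts"],
}
# With credentials configured:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
print(json.dumps(alarm_params, indent=2))
```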

# Add observability to your Amazon Bedrock AgentCore resources

Amazon Bedrock AgentCore provides a number of built-in metrics to monitor the performance of resources for the AgentCore runtime, memory, gateway, built-in tools, and identity resource types. This default data is available in Amazon CloudWatch. To view the full range of observability data in the CloudWatch console, or to output custom runtime metrics for agents, you need to instrument your code using the AWS Distro for OpenTelemetry (ADOT) SDK.

To view the observability dashboard in CloudWatch, open the [Amazon CloudWatch GenAI Observability](https://console.aws.amazon.com/cloudwatch/home#gen-ai-observability) page.

See the following sections to learn more about configuring your resources to view observability metrics in the CloudWatch console generative AI observability page and in CloudWatch Logs.

**Tip**  
Use of the ADOT SDK to output custom metrics is also supported for agents running outside the AgentCore runtime. To learn how to enable observability for these agents, see [Enabling observability for agents hosted outside of AgentCore](#observability-configure-3p).

**Topics**
+ [Enabling AgentCore observability](#observability-configure-builtin)
+ [Enabling observability in agent code for AgentCore-hosted agents](#observability-configure-custom)
+ [Enabling observability for agents hosted outside of AgentCore](#observability-configure-3p)
+ [Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources](#observability-configure-cloudwatch)
+ [Enhanced AgentCore runtime observability with custom headers](#observability-configure-invoke)
+ [Enhanced AgentCore built-in tools observability with custom headers](#observability-configure-built-in)
+ [Enhanced AgentCore identity observability with custom headers](#observability-configure-identity)
+ [Observability best practices](#observability-configure-best-practice)
+ [Using other observability platforms](#observability-configure-3p-platforms)

## Enabling AgentCore observability


To view metrics, spans, and traces generated by the AgentCore service, you first need to complete a one-time setup to turn on Amazon CloudWatch Transaction Search. To view service-provided spans for memory resources, you also need to enable tracing when you create a memory. See [Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources](#observability-configure-cloudwatch) to learn more.

The following sections describe how to perform these setup actions and to enable observability in your agent code.

### Enabling CloudWatch Transaction Search


You can enable CloudWatch Transaction Search either by using the CloudWatch console, or by using an API through the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs.

Use one of the following procedures to enable Transaction Search.

**To enable CloudWatch Transaction Search in the CloudWatch console**

1. Open the [CloudWatch](https://console.aws.amazon.com/cloudwatch) console.

1. In the navigation pane, expand **Application Signals (APM)** and choose **Transaction search**.

1. Choose **Enable Transaction Search**.

1. Select the checkbox to ingest spans as structured logs.

1. Choose **Save**.

**To enable CloudWatch Transaction Search using an API**

1. When using the AWS CLI or an AWS SDK to enable Transaction Search, first configure the necessary permissions to ingest spans in CloudWatch Logs by adding a resource-based policy with [PutResourcePolicy](https://docs.aws.amazon.com/xray/latest/api/API_PutResourcePolicy.html).

   The following AWS CLI command adds a resource policy that gives AWS X-Ray permissions to send traces to CloudWatch Logs.

   ```
   aws logs put-resource-policy --policy-name MyResourcePolicy --policy-document '{ "Version": "2012-10-17", "Statement": [ { "Sid": "TransactionSearchXRayAccess", "Effect": "Allow", "Principal": { "Service": "xray.amazonaws.com" }, "Action": "logs:PutLogEvents", "Resource": [ "arn:partition:logs:region:account-id:log-group:aws/spans:*", "arn:partition:logs:region:account-id:log-group:/aws/application-signals/data:*" ], "Condition": { "ArnLike": { "aws:SourceArn": "arn:partition:xray:region:account-id:*" }, "StringEquals": { "aws:SourceAccount": "account-id" } } } ]}'
   ```

   For clarity, the inline JSON policy in this command is shown expanded in the following example:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "TransactionSearchXRayAccess",
               "Effect": "Allow",
               "Principal": {
                   "Service": "xray.amazonaws.com"
               },
               "Action": "logs:PutLogEvents",
               "Resource": [
                   "arn:aws:logs:us-east-1:123456789012:log-group:aws/spans:*",
                   "arn:aws:logs:us-east-1:123456789012:log-group:/aws/application-signals/data:*"
               ],
               "Condition": {
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:xray:us-east-1:123456789012:*"
                   },
                   "StringEquals": {
                       "aws:SourceAccount": "123456789012"
                   }
               }
           }
       ]
   }
   ```

1. Configure the destination of your trace segments using [UpdateTraceSegmentDestination](https://docs.aws.amazon.com/xray/latest/api/API_UpdateTraceSegmentDestination.html).

   To use the AWS CLI, run the following command.

   ```
   aws xray update-trace-segment-destination --destination CloudWatchLogs
   ```

1. (Optional) Configure your desired sampling percentage using [UpdateIndexingRule](https://docs.aws.amazon.com/xray/latest/api/API_UpdateIndexingRule.html).

   To use the AWS CLI, run the following command.

   ```
   aws xray update-indexing-rule --name "Default" --rule '{"Probabilistic": {"DesiredSamplingPercentage": number}}'
   ```

## Enabling observability in agent code for AgentCore-hosted agents


In addition to the service-generated metrics, with AgentCore you can also gather span and trace data as well as custom metrics emitted from your agent code.

When you use agent frameworks like [Strands](https://strandsagents.com/latest/), [LangChain](https://www.langchain.com/agents), or [CrewAI](https://www.crewai.com/) with supported third-party instrumentation libraries, the framework itself comes with built-in support for OTEL and GenAI semantic conventions, and it can also be instrumented with an auto-instrumentation package such as `opentelemetry-instrumentation-langchain`. It is also possible to send generative AI semantic conventions [telemetry](https://opentelemetry.io/docs/specs/semconv/gen-ai/) and [spans](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/) by defining a custom tracer. AgentCore supports use of the following instrumentation libraries in your agent framework:
+  [OpenInference](https://github.com/Arize-ai/openinference) 
+  [Openllmetry](https://github.com/traceloop/openllmetry) 
+  [OpenLit](https://github.com/openlit/openlit) 
+  [Traceloop](https://www.traceloop.com/docs/introduction) 

To view this data in the CloudWatch console generative AI observability page and in Amazon CloudWatch, you need to add the AWS Distro for OpenTelemetry (ADOT) SDK to your agent code.

**Note**  
With AgentCore, you can also view metrics for agents that aren’t running in the AgentCore runtime. Additional setup steps are required to configure telemetry outputs for non-AgentCore agents. See the instructions in [Enabling observability for agents hosted outside of AgentCore](#observability-configure-3p) to learn more.

To add ADOT support and enable AgentCore observability, follow the steps in the following procedure.

 **Add observability to your AgentCore agent** 

1. Ensure that your framework is configured to emit traces. For example, in the Strands framework, the tracer object must be configured to instruct Strands to emit OpenTelemetry (OTEL) traces.

1. Add the ADOT SDK and boto3 to your agent’s dependencies. For Python, add the following to your `requirements.txt` file:

   ```
   aws-opentelemetry-distro>=0.10.0
   boto3
   ```

   Alternatively, you can install the dependencies directly:

   ```
   pip install aws-opentelemetry-distro>=0.10.0 boto3
   ```

1. Execute your agent code using the OpenTelemetry auto-instrumentation command:

   ```
   opentelemetry-instrument python my_agent.py
   ```

   This auto-instrumentation approach automatically adds the SDK to the Python path. You may already be using this approach as part of your standard OpenTelemetry implementation.

   For containerized environments (such as Docker), add the following command to your Dockerfile:

   ```
   CMD ["opentelemetry-instrument", "python", "main.py"]
   ```

   When using ADOT, to propagate the session ID correctly, define the `X-Amzn-Bedrock-AgentCore-Runtime-Session-Id` request header. ADOT then sets the session ID correctly in the downstream headers.

   To propagate a trace ID, invoke the AgentCore runtime with the parameter `traceId=<traceId>` set.

   You can also invoke your agent with additional headers for additional observability options. See [Enhanced AgentCore runtime observability with custom headers](#observability-configure-invoke) to learn more.
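Putting the session and trace propagation together, the invocation parameters might be assembled as follows. This is a hedged sketch: the ARN, session ID, and trace ID values are placeholders, and the assumption is that `runtimeSessionId` surfaces as the session header described above.

```python
import json

# Placeholder identifiers -- substitute your own values.
params = {
    "agentRuntimeArn": "YOUR_AGENT_RUNTIME_ARN",
    "runtimeSessionId": "user-session-123",  # session ID propagated downstream
    "traceId": "my-trace-0001",              # trace ID, as described above
    "qualifier": "DEFAULT",
    "payload": json.dumps({"prompt": "What is 2 + 2?"}),
}
# With credentials configured:
# import boto3
# resp = boto3.client("bedrock-agentcore").invoke_agent_runtime(**params)
print(json.dumps(params, indent=2))
```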

## Enabling observability for agents hosted outside of AgentCore


To enable observability for agents hosted outside of the AgentCore runtime, first follow the steps in the previous sections to enable CloudWatch Transaction Search and add the ADOT SDK to your code.

For agents running outside of the AgentCore runtime, you also need to create an agent log group, which you include in your environment variables.

Configure your AWS environment variables, and then set your OpenTelemetry environment variables as shown in the following examples.

AWS environment variables

```
AWS_ACCOUNT_ID=<account id>
AWS_DEFAULT_REGION=<default region>
AWS_REGION=<region>
AWS_ACCESS_KEY_ID=<access key id>
AWS_SECRET_ACCESS_KEY=<secret key>
```

OTEL environment variables

```
AGENT_OBSERVABILITY_ENABLED=true
OTEL_PYTHON_DISTRO=aws_distro
OTEL_PYTHON_CONFIGURATOR=aws_configurator # required for ADOT Python only
OTEL_RESOURCE_ATTRIBUTES=service.name=<agent-name>,aws.log.group.names=/aws/bedrock-agentcore/runtimes/<agent-id>,cloud.resource_id=<AgentEndpointArn:AgentEndpointName> # endpoint is optional
OTEL_EXPORTER_OTLP_LOGS_HEADERS=x-aws-log-group=/aws/bedrock-agentcore/runtimes/<agent-id>,x-aws-log-stream=runtime-logs,x-aws-metric-namespace=bedrock-agentcore
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_TRACES_EXPORTER=otlp
```

Replace `<agent-name>` with your agent’s name and `<agent-id>` with a unique identifier for your agent.

**Note**  
(Optional) For agent frameworks other than Strands, LangChain, and CrewAI, you may need to add an additional SDK and code to send generative AI semantic conventions telemetry and spans. CloudWatch AgentCore Observability supports use of the following instrumentation libraries in your agent framework: [OpenInference](https://github.com/Arize-ai/openinference), [Openllmetry](https://github.com/traceloop/openllmetry), [OpenLit](https://github.com/openlit/openlit), and [Traceloop](https://www.traceloop.com/docs/introduction).

### Session ID support


To propagate a session ID, set the session identifier in the OTEL baggage before invoking your agent:

```
from opentelemetry import baggage
from opentelemetry.context import attach

ctx = baggage.set_baggage("session.id", session_id) # Set the session.id in baggage
attach(ctx) # Attach the context to make it active
```

## Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources


When you create an AgentCore runtime resource (agent), by default, AgentCore runtime creates a CloudWatch log group for the service-provided logs. However, for memory, gateway, and built-in tool resources, AgentCore doesn’t configure log destinations for you automatically.

For memory and gateway resources, you can configure log destinations either in the console or by using an AWS SDK. If you use the console to configure a CloudWatch Logs destination, the default log group name for memory and gateway resources has the form `/aws/vendedlogs/bedrock-agentcore/{resource-type}/APPLICATION_LOGS/{resource-id}` , where `{resource-type}` is either `memory` or `gateway`.

For memory and gateway logs, you can also configure log destinations in Amazon S3 logs or Firehose stream logs using the AgentCore console. To learn more about storing logs in Amazon S3 or Firehose, see [Uploading, downloading, and working with objects in Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/uploading-downloading-objects.html) and [Creating an Amazon Data Firehose delivery stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html).

To learn more about the log data output by AgentCore for memory and gateway resources see [Provided log data (memory)](observability-memory-metrics.md#memory_logs_summary) or [Provided log data (gateway)](observability-gateway-metrics.md#observability-gateway-logs-provided).

For built-in tool resources, the AgentCore service doesn’t provide logs by default, but you can output your own logs from your code. If you supply your own log outputs, you need to manually configure log destinations to store this data.

To see what observability data AgentCore provides by default for each resource type, see [Amazon Bedrock AgentCore generated observability data](observability-service-provided.md).

### Configure log destinations using the console


To configure log destinations for memory or gateway logs in the AgentCore console, use the following procedures.

**To configure log delivery for memory resources (console)**

1. Open the [Memory](https://console.aws.amazon.com/bedrock-agentcore/memory) page in the AgentCore console.

1. In the **Memory** pane, select the memory you want to configure a log destination for.

1. Scroll down to the **Log delivery** pane and choose **Add**.

1. From the dropdown list, select the type of log destination you want to add (CloudWatch Logs group, Amazon S3 bucket, or Amazon Data Firehose).

1. For **Log type**, select **APPLICATION_LOGS**.

1. For Amazon S3 and Firehose destinations, enter a **Delivery destination ARN** . For CloudWatch Logs, the **Destination log group** is already populated with a default value.

1. (Optional) For CloudWatch Logs destinations, to change the default log group, enter a new log group name or select an existing log group under **Destination log group**.

1. (Optional) To change the fields that are captured in each log record or the logs' output format, expand **Additional settings - optional** , and modify the **Field selection** , **Output format** , and **Field delimiter** to your desired configuration.

1. Choose **Add**.

**To configure log delivery for gateway resources (console)**

1. Open the [Gateways](https://console.aws.amazon.com/bedrock-agentcore/toolsAndGateways) page in the AgentCore console.

1. In the **Gateways** pane, select the gateway you want to configure a log destination for.

1. Scroll down to the **Log delivery** pane and choose **Add**.

1. From the dropdown list, select the type of log destination you want to add (CloudWatch Logs group, Amazon S3 bucket, or Amazon Data Firehose).

1. For Amazon S3 and Firehose destinations, enter a **Delivery destination ARN** . For CloudWatch Logs, the **Destination log group** is already populated with a default value.

1. (Optional) For CloudWatch Logs destinations, to change the default log group, enter a new log group name or select an existing log group under **Destination log group**.

1. (Optional) To change the fields that are captured in each log record or the logs' output format, expand **Additional settings - optional** , and modify the **Field selection** , **Output format** , and **Field delimiter** to your desired configuration.

1. Choose **Add**.

**To configure log delivery for agent runtime resources (console)**

1. Open the [Agent Runtime](https://console.aws.amazon.com/bedrock-agentcore/agents) page in the AgentCore console.

1. In the **Runtime agents** pane, select the runtime agent for which you want to configure a log destination.

1. Scroll down to the **Log delivery** pane and from the **Add** drop-down, choose the **Logging destination** - either Amazon CloudWatch Logs, Amazon S3, or Amazon Data Firehose.

1. Configure the following log delivery details and then choose **Add** :
   + For **Log type**, choose **APPLICATION_LOGS**.
   + If using Amazon CloudWatch Logs as the logging destination, specify the destination log group.
   + If using Amazon S3 as the logging destination, specify the destination Amazon S3 bucket.
   + If using Amazon Data Firehose as the logging destination, specify a destination delivery stream.

1. Verify that the log delivery status is set to **Delivery active**.

**To configure log delivery for built-in tools resources (console)**

1. Open the [Built-in tools](https://console.aws.amazon.com/bedrock-agentcore/builtInTools) page in the AgentCore console.

1. In the **Built-in tools** pane, on either the **Code interpreter tools** or the **Browser tools** tab, select the code interpreter tool or browser tool for which you want to configure a log destination.

1. Scroll down to the **Log delivery** pane and from the **Add** drop-down, choose the **Logging destination** - either Amazon CloudWatch Logs, Amazon S3, or Amazon Data Firehose.

1. Configure the following log delivery details and then choose **Add** :
   + For **Log type**, choose **APPLICATION_LOGS**.
   + If using Amazon CloudWatch Logs as the logging destination, specify the destination log group.
   + If using Amazon S3 as the logging destination, specify the destination Amazon S3 bucket.
   + If using Amazon Data Firehose as the logging destination, specify a destination delivery stream.

1. Verify that the log delivery status is set to **Delivery active**.

1. WorkloadIdentity log delivery is enabled at the level of the associated resource, such as an agent runtime or gateway resource.

    **To configure WorkloadIdentity log delivery for associated resources (console)** 

1. Open either the [Gateway](https://console.aws.amazon.com/bedrock-agentcore/toolsAndGateways) or the [Agent Runtime](https://console.aws.amazon.com/bedrock-agentcore/agents) page in the AgentCore console and select an agent or a gateway for which you want to enable WorkloadIdentity logging.

1. In the **Identity** tab, scroll down to the **Log delivery** pane and from the **Add** drop-down, choose the **Logging destination** - either Amazon CloudWatch Logs, Amazon S3, or Amazon Data Firehose.

1. Configure the following log delivery details and then choose **Add** :
   + For **Log type**, choose **APPLICATION_LOGS**.
   + If using Amazon CloudWatch Logs as the logging destination, specify the destination log group.
   + If using Amazon S3 as the logging destination, specify the destination Amazon S3 bucket.
   + If using Amazon Data Firehose as the logging destination, specify a destination delivery stream.

1. Verify that the log delivery status is set to **Delivery active**.

### Configure tracing delivery to CloudWatch using the console


This section describes how to enable trace delivery to CloudWatch. Trace delivery tracks the flow of interactions through your application, allowing you to visualize requests, identify performance bottlenecks, troubleshoot errors, and optimize performance.

**Example**  

**To configure tracing for memory resources (console)**

1. Open the [Memory](https://console.aws.amazon.com/bedrock-agentcore/memory) page in the AgentCore console.

1. In the **Memory** pane, select the memory resource for which you want to enable tracing.

1. In the **Tracing** pane, choose **Edit** , toggle the widget to **Enable** , and then choose **Save**.

**To configure tracing for runtime resources (console)**

1. Open the [Agents runtime](https://console.aws.amazon.com/bedrock-agentcore/agents) page in the AgentCore console.

1. In the **Runtime agents** pane, select the agent for which you want to enable tracing.

1. In the **Tracing** pane, choose **Edit** , toggle the widget to **Enable** , and then choose **Save**.

   Tracing will be enabled for the selected agent and spans will be available in the `aws/spans` log group.

    **To configure WorkloadIdentity tracing for runtime resources (console)** 

1. Open the [Agents runtime](https://console.aws.amazon.com/bedrock-agentcore/agents) page in the AgentCore console.

1. In the **Runtime agents** pane, choose the **Identity** tab, and then select the agent for which you want to enable WorkloadIdentity tracing.

1. In the **Tracing** pane, choose **Edit** , toggle the widget to **Enable** , and then choose **Save**.

   WorkloadIdentity tracing will be enabled for the selected agent and spans will be available in the `aws/spans` log group.

**To configure tracing for built-in tools (console)**

1. Open the [Built-in tools](https://console.aws.amazon.com/bedrock-agentcore/builtInTools) page in the AgentCore console.

1. In the **Built-in tools** pane, on either the **Code interpreter tools** or the **Browser tools** tab, select the code interpreter tool or browser tool for which you want to enable tracing.

1. In the **Tracing** pane, choose **Edit** , toggle the widget to **Enable** , and then choose **Save**.

   Tracing will be enabled for the selected code interpreter or browser tool and spans will be available in the `aws/spans` log group.

**To configure tracing for gateway resources (console)**

1. Open the [Gateways](https://console.aws.amazon.com/bedrock-agentcore/toolsAndGateways) page in the AgentCore console.

1. In the **Gateways** pane, select the gateway for which you want to enable tracing.

1. In the **Tracing** pane, choose **Edit** , toggle the widget to **Enable** , and then choose **Save**.

   Tracing will be enabled for the selected gateway and spans will be available in the `aws/spans` log group.

    **To configure WorkloadIdentity tracing for gateway resources (console)** 

1. Open the [Gateways](https://console.aws.amazon.com/bedrock-agentcore/toolsAndGateways) page in the AgentCore console.

1. In the **Gateways** pane, choose the **Identity** tab, and then select the gateway for which you want to enable WorkloadIdentity tracing.

1. In the **Tracing** pane, choose **Edit** , toggle the widget to **Enable** , and then choose **Save**.

   WorkloadIdentity tracing will be enabled for the selected gateway and spans will be available in the `aws/spans` log group.
**Note**  
You must have [CloudWatch Transaction Search](#observability-configure-builtin-cw) enabled before you can enable tracing.

**To configure tracing for identity resources (console)**

1. Open the [Identity](https://console.aws.amazon.com/bedrock-agentcore/identity) page in the AgentCore console.

1. In the **Identity** pane, select the OAuth client or the API key for which you want to enable tracing.

1. In the **Tracing** pane, choose **Edit** , toggle the widget to **Enable** , and then choose **Save**.

### Configure CloudWatch resources using an AWS SDK


 **To configure a delivery source for logs and traces (SDK)** 
+ Run the following Python code to configure CloudWatch for your memory, gateway, and built-in tool resources. Note that delivery sources and destinations for tracing are only applicable for memory and gateway resources.

```
import boto3

def enable_observability_for_resource(resource_arn, resource_id, account_id, region='us-east-1'):
    """
    Enable observability for a Bedrock AgentCore resource (for example, a memory resource).
    """
    logs_client = boto3.client('logs', region_name=region)

    # Step 0: Create a log group for vended log delivery (reuse it if it already exists)
    log_group_name = f'/aws/vendedlogs/bedrock-agentcore/{resource_id}'
    try:
        logs_client.create_log_group(logGroupName=log_group_name)
    except logs_client.exceptions.ResourceAlreadyExistsException:
        pass
    log_group_arn = f'arn:aws:logs:{region}:{account_id}:log-group:{log_group_name}'

    # Step 1: Create the delivery source for logs
    logs_source_response = logs_client.put_delivery_source(
        name=f"{resource_id}-logs-source",
        logType="APPLICATION_LOGS",
        resourceArn=resource_arn
    )

    # Step 2: Create the delivery source for traces
    traces_source_response = logs_client.put_delivery_source(
        name=f"{resource_id}-traces-source",
        logType="TRACES",
        resourceArn=resource_arn
    )

    # Step 3: Create the delivery destination for logs (CloudWatch Logs)
    logs_destination_response = logs_client.put_delivery_destination(
        name=f"{resource_id}-logs-destination",
        deliveryDestinationType='CWL',
        deliveryDestinationConfiguration={
            'destinationResourceArn': log_group_arn,
        }
    )

    # Step 4: Create the delivery destination for traces (X-Ray)
    traces_destination_response = logs_client.put_delivery_destination(
        name=f"{resource_id}-traces-destination",
        deliveryDestinationType='XRAY'
    )

    # Step 5: Create deliveries (connect sources to destinations)
    logs_delivery = logs_client.create_delivery(
        deliverySourceName=logs_source_response['deliverySource']['name'],
        deliveryDestinationArn=logs_destination_response['deliveryDestination']['arn']
    )

    traces_delivery = logs_client.create_delivery(
        deliverySourceName=traces_source_response['deliverySource']['name'],
        deliveryDestinationArn=traces_destination_response['deliveryDestination']['arn']
    )

    print(f"Observability enabled for {resource_id}")
    return {
        'logs_delivery_id': logs_delivery['id'],
        'traces_delivery_id': traces_delivery['id']
    }

# Usage example
resource_arn = "arn:aws:bedrock-agentcore:us-east-1:123456789012:memory/my-memory-id"
resource_id = "my-memory-id"
account_id = "123456789012"

delivery_ids = enable_observability_for_resource(resource_arn, resource_id, account_id)
```

## Enhanced AgentCore runtime observability with custom headers


You can invoke your agent with additional HTTP headers to provide enhanced observability options. The following example shows a basic invocation of an agent hosted in the AgentCore runtime; the optional headers that you can add are described in the table that follows.

 **Example Boto3 invocation** 

```
import boto3

def invoke_agent(agent_runtime_arn, payload, session_id):
    client = boto3.client("bedrock-agentcore", region_name="us-west-2")
    response = client.invoke_agent_runtime(
        agentRuntimeArn=agent_runtime_arn,
        runtimeSessionId=session_id,
        payload=payload,
    )
    return response

invoke_agent(
    agent_runtime_arn="arn:aws:bedrock-agentcore:us-west-2:111122223333:runtime/test_agent_boto2-nIg2xk3VSR",
    payload='{"query": "Plan a weekend in Seattle"}',
    session_id="12345678-1234-5678-9abc-123456789012",
)
```

You can include the following optional headers when invoking your agent to enhance observability and tracing capabilities:


| Header | Description | Sample Value | Technical Explanation | 
| --- | --- | --- | --- | 
|  X-Amzn-Trace-Id  |  Trace ID for request tracking (X-Ray Format)  |  Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1  |  Used for distributed tracing across AWS services. Contains root ID (request origin), parent ID (previous service), and sampling decision for tracing. Sampling=1 means 100% sampling. Parent is X-Ray Trace format as well. OTEL will auto-generate trace IDs if not supplied.  | 
|  traceparent  |  W3C standard tracing header  |  00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01  |  W3C format that includes version, trace ID, parent ID, and flags. Required for cross-service trace correlation when using modern tracing systems.  | 
|  X-Amzn-Bedrock-AgentCore-Runtime-Session-Id  |  AgentCore session identifier  |  a1b2c3d4-5678-90ab-cdef-EXAMPLEaaaaa  |  Identifies a user session within the AgentCore system. Helps with session-based analytics and troubleshooting.  | 
|  mcp-session-id  |  MCP session identifier  |  mcp-a1b2c3d4-5678-90ab-cdef-EXAMPLEaaaaa  |  Identifies a session for the Model Context Protocol (MCP). Enables tracing of operations across MCP interactions.  | 
|  tracestate  |  Additional tracing state information  |  congo=t61rcWkgMzE,rojo=00f067aa0ba902b7  |  Vendor-specific tracing information. Conveys additional context for tracing systems beyond what’s in traceparent.  | 
|  baggage  |  Context propagation for distributed tracing  |  userId=alice,serverRegion=us-east-1  |  Key-value pairs that propagate user-defined properties across service boundaries for contextual logging and analysis.  | 
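If no upstream service supplies trace context, you can generate well-formed values for the two trace-ID headers yourself. The sketch below builds values matching the W3C Trace Context format and the X-Ray format shown in the table; the helper names are illustrative, not part of any SDK.

```
import secrets

def make_traceparent() -> str:
    """Build a W3C traceparent value: version-traceid-parentid-flags."""
    trace_id = secrets.token_hex(16)   # 32 hex chars, must not be all zeros
    parent_id = secrets.token_hex(8)   # 16 hex chars
    return f"00-{trace_id}-{parent_id}-01"  # trailing 01 = sampled

def make_xray_trace_id(epoch_seconds: int) -> str:
    """Build an X-Amzn-Trace-Id value: Root=1-<epoch hex>-<24 hex chars>."""
    return f"Root=1-{epoch_seconds:08x}-{secrets.token_hex(12)};Sampled=1"

headers = {
    "traceparent": make_traceparent(),
    "X-Amzn-Trace-Id": make_xray_trace_id(1700000000),
}
```

Supplying your own IDs this way lets you correlate agent traces with traces emitted by the rest of your application.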

## Enhanced AgentCore built-in tools observability with custom headers


You can invoke your built-in tools with additional HTTP headers to provide enhanced observability options. You can include the following optional headers when calling the following built-in tools APIs to enhance observability and tracing capabilities:

The following APIs support custom headers:
+ StartCodeInterpreterSession
+ InvokeCodeInterpreter
+ StopCodeInterpreterSession
+ StartBrowserSession
+ StopBrowserSession


| Header | Description | Sample Value | Technical Explanation | 
| --- | --- | --- | --- | 
|  X-Amzn-Trace-Id  |  Trace ID for request tracking (X-Ray Format)  |  Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1  |  Used for distributed tracing across AWS services. Contains root ID (request origin), parent ID (previous service), and sampling decision for tracing. Sampling=1 means 100% sampling. Parent is X-Ray Trace format as well. OTEL will auto-generate trace IDs if not supplied.  | 
|  traceparent  |  W3C standard tracing header  |  00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01  |  W3C format that includes version, trace ID, parent ID, and flags. Required for cross-service trace correlation when using modern tracing systems.  | 

## Enhanced AgentCore identity observability with custom headers


You can invoke your identity resources with additional HTTP headers to provide enhanced observability options. You can include the following optional headers when calling the following identity APIs to enhance observability and tracing capabilities:

The following APIs support custom headers:
+ GetWorkloadAccessToken
+ GetWorkloadAccessTokenForJWT
+ GetWorkloadAccessTokenForUserId
+ GetResourceOauth2Token
+ GetResourceAPIKey


| Header | Description | Sample Value | Technical Explanation | 
| --- | --- | --- | --- | 
|  X-Amzn-Trace-Id  |  Trace ID for request tracking (X-Ray Format)  |  Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1  |  Used for distributed tracing across AWS services. Contains root ID (request origin), parent ID (previous service), and sampling decision for tracing. Sampling=1 means 100% sampling. Parent is X-Ray Trace format as well. OTEL will auto-generate trace IDs if not supplied.  | 

## Observability best practices


Consider the following best practices when implementing observability for agents in AgentCore:
+ Use consistent session IDs - When possible, reuse the same session ID for related requests to maintain context across interactions.
+ Implement distributed tracing - Use the provided headers to enable end-to-end tracing across your application components.
+ Add custom attributes - Enhance your traces and metrics with custom attributes that provide additional context for troubleshooting and analysis.
+ Monitor resource usage - Pay attention to memory usage metrics to optimize your agent’s performance.
+ Set up alerts - Configure CloudWatch alarms to help notify you of potential issues before they impact your users.

## Using other observability platforms


To integrate agents hosted in the AgentCore runtime with other observability platforms to capture and view telemetry outputs, set the following environment variable:

```
DISABLE_ADOT_OBSERVABILITY=true
```

Setting this variable to `true` unsets the AgentCore runtime’s default ADOT environment variables, ensuring that none of the default ADOT configurations are set.
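Inside your agent code, you can gate any exporter setup on the same flag. The parsing below is an illustrative sketch under the assumption that the flag is compared case-insensitively against `true`; the documented behavior only describes setting the variable to `true`.

```
import os

def adot_disabled() -> bool:
    # Illustrative check: treat ADOT as disabled only when the variable
    # is set to "true" (case-insensitive), matching the documented value.
    return os.environ.get("DISABLE_ADOT_OBSERVABILITY", "").lower() == "true"

if adot_disabled():
    # Configure your third-party OpenTelemetry exporter here.
    pass
```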

# Understand observability for agentic resources in AgentCore
Observability concepts

This section defines the concepts of sessions, traces, and spans as they relate to monitoring and observability of agents.

**Topics**
+ [Sessions](#sessions)
+ [Traces](#traces)
+ [Spans](#agent_spans)
+ [Relationship Between Sessions, Traces, and Spans](#relationship)

## Sessions


A session represents a complete interaction context between a user and an agent. Sessions encapsulate the entire conversation or interaction flow, maintaining state and context across multiple exchanges. Each session has a unique identifier and captures the full lifecycle of user engagement with the agent, from initialization to termination.

Sessions provide the following capabilities for agents:
+ Context persistence across multiple interactions within the same conversation
+ State management for maintaining user-specific information
+ Conversation history tracking for contextual understanding
+ Resource allocation and management for the duration of the interaction
+ Isolation between different user interactions with the same agent

From an observability perspective, sessions provide a high-level view of user engagement patterns, allowing you to monitor agent performance across metrics, traces, and spans and to understand how users interact with your agents over time and across different use cases.

By default, AgentCore provides a set of observability metrics at the session level for agents that are running in the AgentCore runtime. You can view the runtime metrics in the Amazon CloudWatch console on the generative AI observability page. This page offers a variety of graphs and visualizations to help you interpret your agents' data. AgentCore also outputs a default set of metrics for memory resources, gateway resources, and built-in tools. All of these metrics can be viewed in CloudWatch. In addition to the provided metrics, logs and spans are provided by default for memory resources, and by instrumenting your agent code, you can capture custom metrics, logs, and spans for your agent which can also be viewed on the CloudWatch generative AI observability page. See the following sections and [View observability data for your Amazon Bedrock AgentCore agents](observability-view.md) to learn more.

## Traces


A trace represents a detailed record of a single request-response cycle, beginning with an agent invocation; it may include additional calls to other agents. Traces capture the complete execution path of a request, including all internal processing steps, external service calls, decision points, and resource utilization. Each trace is associated with a specific session and provides granular visibility into the agent’s behavior for a particular interaction.

Traces include the following components for agents:
+ Request details including timestamps, input parameters, and context
+ Processing steps showing the sequence of operations performed
+ Tool invocations with input/output parameters and execution times
+ Resource utilization metrics such as processing time
+ Error information including exception details and recovery attempts
+ Response generation details and final output

From an observability perspective, traces provide deep insights into the internal workings of your agents, allowing you to troubleshoot issues, optimize performance, and understand behavior patterns. By analyzing trace data, you can identify bottlenecks, detect anomalies, and verify that your agent is functioning as expected across different scenarios and inputs.

To gather trace data, you need to instrument your agent code using the AWS Distro for Open Telemetry (ADOT). See [Enabling observability in agent code for AgentCore-hosted agents](observability-configure.md#observability-configure-custom) and [Enabling observability for agents hosted outside of AgentCore](observability-configure.md#observability-configure-3p) to learn more.

## Spans


A span represents a discrete, measurable unit of work within an agent’s execution flow. Spans capture fine-grained operations that occur during request processing, providing detailed visibility into the internal components and steps that make up a complete trace. Each span has a defined start and end time, creating a precise timeline of agent activities and their durations.

Spans include the following essential attributes for agent observability:
+ Operation name identifying the specific function or process being executed
+ Timestamps marking the exact start and end times of the operation
+ Parent-child relationships showing how operations nest within larger processes
+ Tags and attributes providing contextual metadata about the operation
+ Events marking significant occurrences within the span’s lifetime
+ Status information indicating success, failure, or other completion states
+ Resource utilization metrics specific to the operation

Spans form a hierarchical structure within traces, with parent spans encompassing child spans that represent more granular operations. For example, a high-level "process user query" span might contain child spans for "parse input," "retrieve context," "generate response," and "format output." This hierarchical organization creates a detailed execution tree that reveals the complete flow of operations within the agent.
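The nesting described above can be sketched with a minimal span recorder. This is an illustration of the parent-child model only, not the OpenTelemetry or AgentCore API.

```
import time
from contextlib import contextmanager

spans = []   # completed spans: (name, parent_name, start_time, end_time)
_stack = []  # names of currently open spans (the ancestry)

@contextmanager
def span(name):
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    start = time.time()
    try:
        yield
    finally:
        _stack.pop()
        spans.append((name, parent, start, time.time()))

# The example hierarchy from the text:
with span("process user query"):
    with span("parse input"):
        pass
    with span("retrieve context"):
        pass
    with span("generate response"):
        pass
    with span("format output"):
        pass
```

Each child span records its parent's name, so the completed list reconstructs the execution tree with precise start and end times for every operation.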

By default, AgentCore outputs a set of span data for memory resources only. This data can be viewed in CloudWatch Logs and CloudWatch Application signals. To record span data for your agents or gateway resources, you need to instrument your agent. See [Enabling observability in agent code for AgentCore-hosted agents](observability-configure.md#observability-configure-custom) and [Enabling observability for agents hosted outside of AgentCore](observability-configure.md#observability-configure-3p) to learn more.

## Relationship Between Sessions, Traces, and Spans


Sessions, traces, and spans form a three-tiered hierarchical relationship in the observability framework for agents. A session contains multiple traces, with each trace representing a discrete interaction within the broader context of the session. Each trace, in turn, contains multiple spans that capture the fine-grained operations and steps within that interaction. This hierarchical structure allows you to analyze agent behavior at different levels of granularity, from high-level session patterns to mid-level interaction flows to detailed execution paths for specific operations.

The relationship between these three observability components can be visualized as:
+ Sessions (highest level) - Represent complete user conversations or interaction contexts
+ Traces (middle level) - Represent individual request-response cycles within a session
+ Spans (lowest level) - Represent specific operations or steps within a trace

This multi-tiered relationship enables several important observability capabilities:
+ Contextual analysis of individual interactions within their broader conversation flow
+ Correlation of related requests across a user’s interaction journey
+ Progressive troubleshooting from session-level anomalies to trace-level patterns to span-level root causes
+ Comprehensive performance profiling across different temporal and functional dimensions
+ Holistic understanding of agent behavior patterns and evolution throughout a conversation
+ Precise identification of performance bottlenecks at the operation level through span analysis

While traces provide visibility into complete request-response cycles, spans offer deeper insights into the internal workings of those cycles. Spans reveal exactly which operations consume the most time, where errors originate, and how different components interact within a single trace. This granularity is particularly valuable when troubleshooting complex issues or optimizing performance in sophisticated agent implementations.

By leveraging session, trace, and span data in your observability strategy, you can gain comprehensive insights into your agent’s behavior, performance, and effectiveness at multiple levels of detail. This multi-layered approach to observability supports continuous improvement, robust troubleshooting, and informed optimization of your agent implementations, from high-level conversation patterns down to individual operation performance.

# Amazon Bedrock AgentCore generated observability data
AgentCore generated observability data

For agents running in the AgentCore runtime, AgentCore automatically generates a set of session metrics which you can view in the Amazon CloudWatch Logs generative AI observability page. You can also use AgentCore observability to monitor the performance of memory, gateway, and built-in tool resources, even if you’re not using the AgentCore runtime to host your agents. For memory, gateway, and built-in tool resources, AgentCore outputs a default set of data to CloudWatch.

The following table summarizes the default data provided for each resource type, and where the data is available.


| Resource type | Service-provided data | Available in CloudWatch gen AI observability | Available in CloudWatch (Logs or metrics) | 
| --- | --- | --- | --- | 
|  Agent  |  Metrics  |  Yes  |  Yes  | 
|  Memory  |  Metrics, Spans\*, Logs\*  |  No  |  Yes  | 
|  Gateway  |  Metrics  |  No  |  Yes  | 
|  Tools  |  Metrics  |  No  |  Yes  | 
|  Policy  |  Metrics, Spans\*\*  |  Yes  |  Yes  | 

\* Memory spans and logs require enablement. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) to learn more.

\*\* Policy-related observability is displayed under the AgentCore Gateway tab in CloudWatch gen AI observability.

**Note**  
To view metrics, spans, and traces for AgentCore, you need to perform a one-time setup process to enable CloudWatch Transaction Search. To learn more see [Enabling AgentCore observability](observability-configure.md#observability-configure-builtin).

Refer to the following topics to learn about the default service-provided observability metrics for AgentCore runtime, memory, and gateway resources.

By instrumenting your agent code, you can also gather more detailed trace and span data as well as custom metrics. See [Enabling observability in agent code for AgentCore-hosted agents](observability-configure.md#observability-configure-custom) to learn more.

**Topics**
+ [AgentCore generated runtime observability data](observability-runtime-metrics.md)
+ [AgentCore generated memory observability data](observability-memory-metrics.md)
+ [AgentCore generated gateway observability data](observability-gateway-metrics.md)
+ [AgentCore generated built-in tools observability data](observability-tool-metrics.md)
+ [AgentCore generated identity observability data](observability-identity-metrics.md)
+ [AgentCore generated policy observability data](observability-policy-metrics.md)

# AgentCore generated runtime observability data
Runtime observability data

The runtime metrics provided by AgentCore give you visibility into your agent execution activity levels, processing latency, resource utilization, and error rates. AgentCore also provides aggregated metrics for total invocations and sessions.

**Topics**
+ [Observability runtime metrics](#observability-runtime-metrics-one)
+ [Resource usage metrics and logs](#observability-runtime-resource-usage-metrics-logs)
+ [Provided span data](#observability-runtime-span-data)
+ [Application log data](#observability-runtime-application-log-data)
+ [Error types](#observability-runtime-metrics-errors)

## Observability runtime metrics


The following list describes the runtime metrics provided by AgentCore. Runtime metrics are batched at one minute intervals. To learn more about viewing runtime metrics, see [View observability data for your Amazon Bedrock AgentCore agents](observability-view.md).

Invocations  
Shows the total number of requests made to the Data Plane API. Each API call counts as one invocation, regardless of the request payload size or response status.

Invocations (aggregated)  
Shows the total number of invocations across all resources.

Throttles  
Displays the number of requests throttled by the service due to exceeding allowed TPS (Transactions Per Second) or quota limits. These requests return ThrottlingException with HTTP status code 429. Monitor this metric to determine if you need to review your service quotas or optimize request patterns.
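When the Throttles metric climbs, clients typically retry with exponential backoff rather than raising their request rate. A minimal sketch follows; the `ThrottlingException` class and `invoke_with_backoff` helper here are stand-ins for illustration, not AgentCore or SDK APIs.

```
import time

class ThrottlingException(Exception):
    """Stand-in for the SDK's throttling error (HTTP 429)."""

def invoke_with_backoff(call, max_attempts=5, base_delay=0.5):
    """Retry `call` on throttling, doubling the delay after each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ThrottlingException:
            if attempt == max_attempts - 1:
                raise  # exhausted retries; surface the error
            time.sleep(base_delay * (2 ** attempt))
```

With boto3, you would typically catch the SDK's client error and inspect its error code for throttling instead of a dedicated exception class; the AWS SDKs also provide configurable built-in retry modes.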

System Errors  
Shows the number of server-side errors encountered by AgentCore during request processing. High levels of server-side errors can indicate potential infrastructure or service issues that require investigation. See [Error types](#observability-runtime-metrics-errors) for a list of possible error codes.

User Errors  
Represents the number of client-side errors resulting from invalid requests. These require user action to resolve. High levels of client-side errors can indicate issues with request formatting or permissions that need to be addressed. See [Error types](#observability-runtime-metrics-errors) for a list of possible error codes.

Latency  
The total time elapsed between receiving the request and sending the final response token. Represents complete end-to-end processing time of the request.

Total Errors  
The total number of system and user errors. In the Amazon Bedrock AgentCore console, this metric displays the number of errors as a percentage of the total number of invocations.

Session Count  
Shows the total number of agent sessions. Useful for monitoring overall platform usage, capacity planning, and understanding user engagement patterns.

Sessions (aggregated)  
Shows the total number of sessions across all resources.

ActiveStreamingConnections  
 **(WebSocket only)** Shows the current number of active WebSocket connections per agent. Monitor this metric to understand connection usage and detect connection drops or spikes for capacity planning. The only meaningful statistic is a 1-minute sum.

InboundStreamingBytesProcessed  
 **(WebSocket only)** Displays the total number of bytes successfully processed in WebSocket frames received from clients to agent containers. Use this metric to monitor data throughput and identify usage patterns.

OutboundStreamingBytesProcessed  
 **(WebSocket only)** Shows the total number of bytes successfully processed in WebSocket frames sent from agent containers to clients. Monitor this metric to understand agent response patterns and ensure successful data transmission.
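The Total Errors metric above is displayed in the console as a percentage of invocations. The same computation, as a quick sketch:

```python
def error_rate_percent(system_errors: int, user_errors: int, invocations: int) -> float:
    """Total errors (system + user) as a percentage of invocations,
    matching how the console presents the Total Errors metric."""
    if invocations == 0:
        return 0.0
    return 100.0 * (system_errors + user_errors) / invocations

print(error_rate_percent(3, 2, 200))  # → 2.5
```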

## Resource usage metrics and logs


Amazon Bedrock AgentCore runtime provides comprehensive resource usage telemetry, including CPU and memory consumption metrics for your runtime resources.

**Note**  
Resource usage data may be delayed by up to 60 minutes and precision might differ across metrics.

 **Vended metrics** 

Amazon Bedrock AgentCore runtime automatically provides resource usage metrics at the account, agent runtime, and agent endpoint levels. These metrics are published at 1-minute resolution. Amazon CloudWatch aggregation and metric data retention follow standard Amazon CloudWatch data retention policies. For more information, see [Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Metric) in the *Amazon CloudWatch User Guide*.

Here are the dimension sets and metrics available for monitoring your resources:


| Name | Dimensions | Description | 
| --- | --- | --- | 
|  CPUUsed-vCPUHours  |  Service; Service, Resource; Service, Resource, Name  |  The total amount of virtual CPU consumed, in vCPU-hours, available at the resource and account levels. Useful for resource tracking and estimated billing visibility.  | 
|  MemoryUsed-GBHours  |  Service; Service, Resource; Service, Resource, Name  |  The total amount of memory consumed, in GB-hours, available at the resource and account levels. Useful for resource tracking and estimated billing visibility.  | 

Dimension explanation
+  **Service** - AgentCore.Runtime
+  **Resource** - The agent ARN
+  **Name** - The agent endpoint name, in the format AgentName::EndpointName
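These dimension sets can be plugged into a CloudWatch `GetMetricData` request. A minimal sketch follows; the namespace is an assumption (verify it in the CloudWatch console), and the ARN and endpoint values are placeholders:

```python
# Sketch: a GetMetricData query for the CPUUsed-vCPUHours vended metric at
# the agent-endpoint level (dimensions Service, Resource, Name).
def vcpu_hours_query(agent_arn: str, endpoint_name: str) -> dict:
    return {
        "Id": "cpu",
        "MetricStat": {
            "Metric": {
                "Namespace": "Bedrock-AgentCore",  # assumption; confirm in the console
                "MetricName": "CPUUsed-vCPUHours",
                "Dimensions": [
                    {"Name": "Service", "Value": "AgentCore.Runtime"},
                    {"Name": "Resource", "Value": agent_arn},
                    {"Name": "Name", "Value": endpoint_name},
                ],
            },
            "Period": 60,  # metrics are published at 1-minute resolution
            "Stat": "Sum",
        },
    }
```

You would pass the resulting dict in the `MetricDataQueries` list of a boto3 CloudWatch `get_metric_data` call to retrieve the series.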

Account-level metrics are available in the Amazon CloudWatch Bedrock AgentCore Observability console under the **Runtime** tab. The dashboard displays memory and CPU usage graphs generated from these metrics, representing total resource usage across all agents in your account within the Region.

Agent endpoint-level metrics are available on the AgentEndpoint page of the Amazon CloudWatch Bedrock AgentCore Observability console. The dashboard displays memory and CPU usage graphs generated from these metrics, representing total resource usage across all sessions invoked by the specified agent endpoint.

**Note**  
Telemetry data is provided for monitoring purposes. Actual billing is calculated based on metered usage data and may differ from telemetry values due to aggregation timing, reconciliation processes, and measurement precision. Refer to your AWS billing statement for authoritative charges.

 **Vended logs** 

Bedrock AgentCore Runtime provides vended logs for session-level usage metrics at 1-second granularity. Each log record contains resource consumption data including CPU usage (agent.runtime.vcpu.hours.used) and memory consumption (agent.runtime.memory.gb_hours.used).

Each log record has the following schema:


| Log type | Log fields | Description | 
| --- | --- | --- | 
|  USAGE_LOGS  |  event_timestamp, resource_arn, service.name, cloud.provider, cloud.region, account.id, region, resource.id, session.id, agent.name, elapsed_time_seconds, agent.runtime.vcpu.hours.used, agent.runtime.memory.gb_hours.used  |  Resource usage logs for session-level resource tracking.  | 

To enable the USAGE_LOGS log type for your agents, see [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md). The logs are then delivered to the configured destination (CloudWatch log group, Amazon S3, or Amazon Kinesis Data Firehose).

In the Agent Session page of the Amazon CloudWatch Bedrock AgentCore Observability Console, you can see resource usage metrics generated from these logs. To optimize your metric viewing experience, select your desired time range using the selector in the top right to focus on specific CPU and Memory Usage data.
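Per-session totals can be derived from these per-second records. The sketch below assumes each record reports the usage accrued during its one-second interval; field names follow the schema above, and the values are illustrative:

```python
# Illustrative USAGE_LOGS records (one per second of session activity).
records = [
    {"session.id": "s-1", "agent.runtime.vcpu.hours.used": 0.00025,
     "agent.runtime.memory.gb_hours.used": 0.001},
    {"session.id": "s-1", "agent.runtime.vcpu.hours.used": 0.00025,
     "agent.runtime.memory.gb_hours.used": 0.001},
    {"session.id": "s-2", "agent.runtime.vcpu.hours.used": 0.00025,
     "agent.runtime.memory.gb_hours.used": 0.001},
]

def session_usage(records, session_id):
    """Sum vCPU-hours and GB-hours for one session across its records."""
    mine = [r for r in records if r["session.id"] == session_id]
    return (
        sum(r["agent.runtime.vcpu.hours.used"] for r in mine),
        sum(r["agent.runtime.memory.gb_hours.used"] for r in mine),
    )

cpu_hours, gb_hours = session_usage(records, "s-1")
```

Remember that these totals are telemetry, not billing figures; see the note below.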

**Note**  
Telemetry data is provided for monitoring purposes. Actual billing is calculated based on metered usage data and may differ from telemetry values due to aggregation timing, reconciliation processes, and measurement precision. Refer to your AWS billing statement for authoritative charges.

## Provided span data


To enhance observability, AgentCore provides structured spans that give visibility into agent runtime invocations. To enable this span data, you need to enable observability on your agent resource. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) for steps and details. This span data is available in the CloudWatch Logs `aws/spans` log group. The following table defines the operation for which spans are created and the attributes for each captured span.


| Operation name | Span attributes | Description | 
| --- | --- | --- | 
|  InvokeAgentRuntime  |  aws.operation.name, aws.resource.arn, aws.request_id, aws.agent.id, aws.endpoint.name, aws.account.id, session.id, latency_ms, error_type, aws.resource.type, aws.xray.origin, aws.region  |  Invokes the agent runtime.  | 
+ aws.operation.name - the operation name (InvokeAgentRuntime)
+ aws.resource.arn - the Amazon Resource Name (ARN) of the agent runtime
+ aws.request_id - the request ID for the invocation
+ aws.agent.id - the unique identifier for the agent runtime
+ aws.endpoint.name - the name of the endpoint used to invoke the agent runtime
+ aws.account.id - the customer's account ID
+ session.id - the session ID for the invocation
+ latency_ms - the latency of the request in milliseconds
+ error_type - either throttle, system, or user (only present on error)
+ aws.resource.type - the AWS CloudFormation resource type
+ aws.xray.origin - the AWS CloudFormation resource type used by X-Ray to identify the service
+ aws.region - the Region in which the customer resource exists
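With spans flowing into the `aws/spans` log group, a CloudWatch Logs Insights query can surface recent failed invocations. This is a sketch; the field names follow the attribute list above:

```
fields @timestamp, session.id, latency_ms, error_type
| filter aws.operation.name = "InvokeAgentRuntime" and ispresent(error_type)
| sort @timestamp desc
| limit 20
```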

## Application log data


AgentCore provides structured application logs that help you gain visibility into your agent runtime invocations and session-level resource consumption. This log data is provided when you enable observability on your agent resource. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) for steps and details. AgentCore can output logs to CloudWatch Logs, Amazon S3, or a Firehose stream. If you use a CloudWatch Logs destination, these logs are stored under your agent's application logs or under your own custom log group.


| Log type | Log fields | Description | 
| --- | --- | --- | 
|  APPLICATION_LOGS  |  timestamp, resource_arn, event_timestamp, account_id, request_id, session_id, trace_id, span_id, service_name, operation, request_payload, response_payload  |  Application logs for InvokeRuntimeOperation with tracing fields, request, and response payloads  | 
+ request_payload - the request payload of the agent invocation
+ response_payload - the response from the agent invocation

## Error types


The following list defines the possible error types for user, system, and throttling errors.

 **User error codes** 
+  `InvocationError.Validation` - Client provided invalid input (400)
+  `InvocationError.ResourceNotFound` - Requested resource doesn’t exist (404)
+  `InvocationError.AccessDenied` - Client lacks permissions (403)
+  `InvocationError.Conflict` - Resource conflict (409)

 **System error codes** 
+  `InvocationError.Internal` - Internal server error (500)

 **Throttling error codes** 
+  `InvocationError.Throttling` - Rate limiting (429)
+  `InvocationError.ServiceQuota` - Service-side quota/limit reached (402)
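When handling these errors programmatically, throttling and server-side errors are generally safe to retry with backoff, while client-side errors need a request fix first. A minimal sketch; the retry policy itself is an assumption about client behavior, not an AgentCore guarantee:

```python
# Sketch: a retry decision for the invocation error types listed above.
# Treating throttling and internal errors as retryable is a client-side
# convention, not something AgentCore mandates.
RETRYABLE = {"InvocationError.Throttling", "InvocationError.Internal"}

def should_retry(error_code: str) -> bool:
    return error_code in RETRYABLE

print(should_retry("InvocationError.Throttling"))  # → True
```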

# AgentCore generated memory observability data
Memory observability data

For the AgentCore memory resource type, AgentCore outputs metrics to Amazon CloudWatch by default. AgentCore also outputs a default set of spans and logs, if you enable these. See [Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources](observability-configure.md#observability-configure-cloudwatch) to learn more about enabling spans and logs.

Refer to the following sections to learn more about the provided observability data for your agent memory stores.

## Provided memory metrics


The AgentCore memory resource type provides the following metrics by default.

Latency  
The total time elapsed between receiving the request and sending the final response token. Represents complete end-to-end processing of the request.

Invocations  
The total number of API requests made to the data plane and control plane. This metric also tracks the number of memory ingestion events.

System Errors  
Number of invocations that result in AWS server-side errors.

User Errors  
Number of invocations that result in client-side errors.

Errors  
Total number of errors that occur while processing API requests in the data plane and control plane. This metric also tracks the total errors that occur during memory ingestion.

Throttles  
Number of invocations that the system throttled. Throttled requests count as invocations, errors, and user errors.

Creation Count  
Counts the number of created memory events and memory records.

## Provided span data


To enhance observability, AgentCore provides structured spans that trace the relationship between events and the memories they generate or access. To enable this span data, you need to instrument your agent code. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) to learn more.

This span data is available in full in CloudWatch Logs and CloudWatch Application Signals. To learn more about viewing observability data, see [View observability data for your Amazon Bedrock AgentCore agents](observability-view.md).

The following table defines the operations for which spans are created and the attributes for each captured span.


| Operation name | Span attributes | Description | 
| --- | --- | --- | 
|   `CreateEvent`   |   `memory.id` , `session.id` , `event.id` , `actor.id` , `throttled` , `error` , `fault`   |  Creates a new event within a memory session  | 
|   `GetEvent`   |   `memory.id` , `session.id` , `event.id` , `actor.id` , `throttled` , `error` , `fault`   |  Retrieves an existing memory event  | 
|   `ListEvents`   |   `memory.id` , `session.id` , `event.id` , `actor.id` , `throttled` , `error` , `fault`   |  Lists events within a session  | 
|  DeleteEvent  |   `memory.id` , `session.id` , `event.id` , `actor.id` , `throttled` , `error` , `fault`   |  Deletes an event from memory  | 
|  RetrieveMemoryRecords  |   `memory.id` , `namespace` , `throttled` , `error` , `fault`   |  Retrieves memory records for a given namespace  | 
|  ListMemoryRecords  |   `memory.id` , `namespace` , `throttled` , `error` , `fault`   |  Lists available memory records  | 

## Provided log data


AgentCore provides structured logs that help you monitor and troubleshoot key AgentCore Memory resource processes. To enable this log data, you need to instrument your agent code. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) to learn more.

AgentCore can output logs to CloudWatch Logs, Amazon S3, or Firehose stream. If you use a CloudWatch Logs destination, these logs are stored under the default log group `/aws/vendedlogs/bedrock-agentcore/memory/APPLICATION_LOGS/{memory_id}` or under a custom log group starting with `/aws/vendedlogs/` . See [Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources](observability-configure.md#observability-configure-cloudwatch) to learn more.

When the `DeleteMemory` operation is called, logs are generated for the start and completion of the deletion process. If the call fails, corresponding error logs provide insight into why.

We also provide logs for various stages in the long-term memory creation process, namely extraction and consolidation. When new short-term memory events arrive, AgentCore extracts key concepts from responses to begin forming new long-term memory records. Once created, these records are integrated with existing memory records to create a unified store of distinct memories.

See the following breakdown to learn how each workflow helps you monitor the formation of new memories:

 **Extraction logs** 
+ Start and completion of extraction processing
+ Number of memories successfully extracted
+ Any errors in deserializing or processing input events

 **Consolidation logs** 
+ Start and completion of consolidation processing
+ Number of memories requiring consolidation
+ Success/failure of memory additions and updates
+ Related memory retrieval status

The following table provides a more detailed breakdown of how different memory resource workflows use log fields alongside the log body itself to provide request-specific information.


| Workflow name | Log fields | Description | 
| --- | --- | --- | 
|  Extraction  |  resource_arn, event_timestamp, memory_strategy_id, namespace, actor_id, session_id, event_id, requestId, isError  |  Analyzes incoming conversations to generate new memories  | 
|  Consolidation  |  resource_arn, event_timestamp, memory_strategy_id, namespace, session_id, requestId, isError  |  Combines extracted memories with existing memories  | 
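A Logs Insights query over the memory log group can isolate failed extraction or consolidation attempts. This is a sketch; it assumes the `isError` field is queryable as a boolean (0/1) in your records:

```
fields @timestamp, requestId, memory_strategy_id, namespace, @message
| filter isError = 1
| sort @timestamp desc
| limit 20
```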

# AgentCore generated gateway observability data
Gateway observability data

The following sections describe the gateway metrics, logs, and spans output by AgentCore to Amazon CloudWatch. These metrics aren’t available on the CloudWatch generative AI observability page. Gateway metrics are batched at one minute intervals. To learn more about viewing gateway metrics, see [View observability data for your Amazon Bedrock AgentCore agents](observability-view.md).

**Note**  
To enable service-provided logs for AgentCore gateways, you need to configure the necessary CloudWatch resources. See [Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources](observability-configure.md#observability-configure-cloudwatch) to learn more.

**Topics**
+ [Provided metrics](#observability-gateway-metrics-provided)
+ [Provided log data](#observability-gateway-logs-provided)
+ [Provided spans](#observability-gateway-vended-spans)

## Provided metrics


Gateway publishes invocation and usage metrics to CloudWatch. You can view these metrics and also set up alarms to alert you when certain metrics exceed thresholds. To learn more, select a topic:

**Topics**
+ [Invocation metrics](#gateway-metrics-invocation)
+ [Usage metrics](#gateway-metrics-usage)
+ [View gateway CloudWatch metrics](#gateway-metrics-view-console)
+ [Setting up CloudWatch alarms](#gateway-advanced-observability-alarms)

### Invocation metrics


These metrics provide information about API invocations, performance, and errors.

For these metrics, the following dimensions are used:
+  **Operation** – The name of the API operation (for example, InvokeGateway).
+  **Protocol** – The name of the protocol (for example, MCP).
+  **Method** – The MCP operation being invoked (for example, tools/list).
+  **Resource** – The identifier of the resource (for example, the gateway ARN).
+  **Name** – The name of the tool.


| Metric | Description | Statistics | Units | 
| --- | --- | --- | --- | 
|  Invocations  |  The total number of requests made to each Data Plane API. Each API call counts as one invocation regardless of the response status.  |  Sum  |  Count  | 
|  Throttles  |  The number of requests throttled (status code 429) by the service.  |  Sum  |  Count  | 
|  SystemErrors  |  The number of requests which failed with 5xx status code.  |  Sum  |  Count  | 
|  UserErrors  |  The number of requests which failed with 4xx status code except 429.  |  Sum  |  Count  | 
|  Latency  |  The time elapsed between when the service receives the request and when it begins sending the first response token; in other words, the initial response time.  |  Average, Minimum, Maximum, p50, p90, p99  |  Milliseconds  | 
|  Duration  |  The total time elapsed between receiving the request and sending the final response token. Represents complete end-to-end processing time of the request.  |  Average, Minimum, Maximum, p50, p90, p99  |  Milliseconds  | 
|  TargetExecutionTime  |  The total time taken to execute the target (Lambda, OpenAPI, and so on). This helps determine the contribution of the target to the total Latency.  |  Average, Minimum, Maximum, p50, p90, p99  |  Milliseconds  | 
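Because TargetExecutionTime shows the target's share of overall timing, one way to chart gateway overhead is a metric-math expression that subtracts it from Duration. A sketch of the `GetMetricData` queries; the gateway ARN is a placeholder:

```python
# Sketch: metric-math queries that chart gateway overhead as
# Duration minus TargetExecutionTime, using the Bedrock-AgentCore
# namespace from the alarm example in this guide.
def overhead_queries(gateway_arn: str) -> list:
    def stat(metric_id: str, metric_name: str) -> dict:
        return {
            "Id": metric_id,
            "MetricStat": {
                "Metric": {
                    "Namespace": "Bedrock-AgentCore",
                    "MetricName": metric_name,
                    "Dimensions": [{"Name": "Resource", "Value": gateway_arn}],
                },
                "Period": 60,
                "Stat": "Average",
            },
            "ReturnData": False,  # only plot the derived expression
        }

    return [
        stat("dur", "Duration"),
        stat("tgt", "TargetExecutionTime"),
        {"Id": "overhead", "Expression": "dur - tgt", "Label": "Gateway overhead (ms)"},
    ]
```

Pass the list as `MetricDataQueries` to a boto3 CloudWatch `get_metric_data` call, or paste the equivalent expression in the CloudWatch metrics graphing UI.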

### Usage metrics


These metrics provide information about how your gateway is being used.


| Metric | Description | Statistics | Units | 
| --- | --- | --- | --- | 
|  TargetType  |  The total number of requests served by each type of target (MCP, Lambda, OpenAPI).  |  Sum  |  Count  | 

### View gateway CloudWatch metrics


For more information about viewing CloudWatch metrics, see [View available metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/viewing_metrics_with_cloudwatch.html) in the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/). The following procedure shows you how to view metrics for your gateways:

 **To view gateway metrics in the console** 

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the left navigation pane, choose **All metrics** under the **Metrics** section.

1. Under **Browse**, from the dropdown menu that displays the current AWS Region, select the Region for which you want metrics.

1. Choose the **Bedrock-AgentCore** namespace.

1. Choose a dimension (for example, **Operation**) or combination of dimensions (for example, **Method, Operation, Protocol**) to view the metrics for it.

1. To add a metric to the CloudWatch graph, select the checkbox next to it.

### Setting up CloudWatch alarms


You can use the [PutMetricAlarm](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_PutMetricAlarm.html) API operation to set up CloudWatch alarms to alert you when certain metrics exceed thresholds. For example, you might want to be notified when the error rate exceeds 5% or when the latency exceeds 1 second.

The following example shows you how to create an alarm for high error rates using the AWS CLI:

```
aws cloudwatch put-metric-alarm \
  --alarm-name "HighErrorRate" \
  --alarm-description "Alarm when system errors exceed 5 in a 5-minute period" \
  --metric-name "SystemErrors" \
  --namespace "Bedrock-AgentCore" \
  --statistic "Sum" \
  --dimensions "Name=Resource,Value=my-gateway-arn" \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator "GreaterThanThreshold" \
  --alarm-actions "arn:aws:sns:us-west-2:123456789012:my-topic"
```

This alarm will trigger when the number of system errors exceeds 5 in a 5-minute period. When the alarm triggers, it will send a notification to the specified SNS topic.

## Provided log data


AgentCore provides logs that help you monitor and troubleshoot key AgentCore gateway resource processes. To enable this log data, you need to create a log destination.

AgentCore can output logs to CloudWatch Logs, Amazon S3, or Firehose stream. If you use a CloudWatch Logs destination, these logs are stored under the default log group `/aws/vendedlogs/bedrock-agentcore/gateway/APPLICATION_LOGS/{gateway_id}` or under a custom log group starting with `/aws/vendedlogs/` . See [Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources](observability-configure.md#observability-configure-cloudwatch) to learn more.

AgentCore logs the following information for gateway resources:
+ Start and completion of gateway requests processing
+ Error messages for Target configurations
+ MCP Requests with missing or incorrect authorization headers
+ MCP Requests with incorrect request parameters (tools, method)

You can also see request and response bodies as part of your vended logs integration when any MCP operations are performed on the gateway. You can further analyze these logs, using the `span_id` and `trace_id` fields to connect the vended spans and logs being emitted. For more information about encrypting your gateways with customer-managed KMS keys, see [Advanced features and topics for Amazon Bedrock AgentCore Gateway](gateway-advanced.md).

Sample log:

```
{
    "resource_arn": "arn:aws:bedrock-agentcore:us-east-1:123456789012:gateway/<gatewayid>",
    "event_timestamp": 1759370851622,
    "body": {
        "isError": false,
        "log": "Started processing request with requestId: 1",
        "requestBody": "{id=1, jsonrpc=2.0, method=tools/call, params={name=target-quick-start-f9scus___LocationTool, arguments={location=seattle}}}",
        "id": "1"
    },
    "account_id": "123456789012",
    "request_id": "12345678-1234-1234-1234-123456789012",
    "trace_id": "160fc209c3befef4857ab1007d041db0",
    "span_id": "81346de89c725310"
}
```

Sample log with response body:

```
{
    "resource_arn": "arn:aws:bedrock-agentcore:us-east-1:123456789012:gateway/<gatewayid>",
    "event_timestamp": 1759370853807,
    "body": {
        "isError": false,
        "responseBody": "{jsonrpc=2.0, id=1, result={isError=false, content=[{type=text, text=\"good\"}]}}",
        "log": "Successfully processed request with requestId: 2",
        "id": "1"
    },
    "account_id": "123456789012",
    "request_id": "12345678-1234-1234-1234-123456789012",
    "trace_id": "160fc209c3befef4857ab1007d041db0",
    "span_id": "81346de89c725310"
}
```
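These records are plain JSON, so pulling the correlation fields (`trace_id`, `span_id`, `request_id`) is straightforward. A sketch using the response-body sample above, with the `responseBody` value truncated for brevity:

```python
import json

# Parse a vended gateway log record and extract the fields used to
# stitch spans and logs together.
record = json.loads("""
{
    "resource_arn": "arn:aws:bedrock-agentcore:us-east-1:123456789012:gateway/<gatewayid>",
    "event_timestamp": 1759370853807,
    "body": {
        "isError": false,
        "responseBody": "...",
        "log": "Successfully processed request with requestId: 2",
        "id": "1"
    },
    "account_id": "123456789012",
    "request_id": "12345678-1234-1234-1234-123456789012",
    "trace_id": "160fc209c3befef4857ab1007d041db0",
    "span_id": "81346de89c725310"
}
""")

correlation = {k: record[k] for k in ("trace_id", "span_id", "request_id")}
print(correlation["trace_id"])  # → 160fc209c3befef4857ab1007d041db0
```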

## Provided spans


AgentCore supports OTEL-compliant vended spans that you can use to track invocations across the different primitives in use.

Sample vended spans for tool invocation:
+  `kind:SERVER` - tracks the overall execution details, the tool invoked, gateway details, AWS request ID, and the trace and span IDs.
+  `kind:CLIENT` - covers the specific target that was invoked and details such as target type, target execution time, and target execution start and end times.

For other MCP method invocations, only the `kind:SERVER` span is emitted.

Although these spans emit metrics, to investigate why a specific span failed, check the vended logs. Fields such as `spanId` or `aws.request.id` help stitch these spans and logs together.


| Operation | Span attributes | Description | 
| --- | --- | --- | 
|  List Tools  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, gateway.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type, jsonrpc.error.code, http.method, http.response.status_code, gateway.name, url.path, overhead_latency_ms  |  List tools attached to a gateway  | 
|  Call Tool  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, gateway.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type, jsonrpc.error.code, http.method, http.response.status_code, gateway.name, url.path, overhead_latency_ms, tool.name  |  Call a specific tool. Two spans are emitted: 1. `kind:SERVER`, which tracks the overall execution details (success or failure), the tool invoked, gateway details, AWS request ID, and trace and span IDs. 2. `kind:CLIENT`, which covers the specific target that was invoked and details such as target type, target execution time, and target execution start and end times.  | 
|  Search Tools  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, gateway.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type, jsonrpc.error.code, http.method, http.response.status_code, gateway.name, url.path, overhead_latency_ms, tool.name  |  Search for the ten most relevant tools given an input query  | 

# AgentCore generated built-in tools observability data
Built-in tools observability data

**Topics**
+ [Observability tools metrics](#observability-tools-metrics-one)
+ [Resource usage metrics and logs](#observability-tools-resource-usage-metrics-logs)
+ [Provided span data](#observability-tools-span-data)
+ [Application log data](#observability-tools-application-log-data)

## Observability tools metrics


AgentCore provides the following built-in metrics for the code interpreter and browser tools. Built-in tool metrics are batched at one minute intervals. To learn more about AgentCore tools, see [Execute code and analyze data using Amazon Bedrock AgentCore Code Interpreter](code-interpreter-tool.md) and [Interact with web applications using Amazon Bedrock AgentCore Browser](browser-tool.md).

 **Invoke tool:** 

Invocations  
The total number of requests made to the Data Plane API. Each API call counts as one invocation, regardless of the request payload size or response status.

Throttles  
The number of requests throttled by the service due to exceeding allowed TPS (Transactions Per Second) or quota limits. These requests return ThrottlingException with HTTP status code 429.

SystemErrors  
The number of server-side errors encountered during request processing.

UserErrors  
The number of client-side errors resulting from invalid requests. These require user action to resolve.

Latency  
The time elapsed between when the service receives the request and when it begins sending the first response token. Important for measuring initial response time.

 **Create tool session:** 

Invocations  
The total number of requests made to the Data Plane API. Each API call counts as one invocation, regardless of the request payload size or response status.

Throttles  
The number of requests throttled by the service due to exceeding allowed TPS (Transactions Per Second) or quota limits. These requests return ThrottlingException with HTTP status code 429.

SystemErrors  
The number of server-side errors encountered during request processing.

UserErrors  
The number of client-side errors resulting from invalid requests. These require user action to resolve.

Latency  
The time elapsed between when the service receives the request and when it begins sending the first response token. Important for measuring initial response time.

Duration  
The duration of the tool session (the Operation dimension becomes CodeInterpreterSession or BrowserSession).

 **Browser user takeover:** 

TakerOverCount  
The total number of times a user took over control of the browser.

TakerOverReleaseCount  
The total number of times a user released control.

TakerOverDuration  
The duration of user takeovers.

## Resource usage metrics and logs


Amazon Bedrock AgentCore Built-in Tools provides comprehensive resource usage telemetry, including CPU and memory consumption metrics for your runtime resources.

**Note**  
Resource usage data may be delayed by up to 60 minutes and precision might differ across metrics.

 **Vended metrics** 

By default, Bedrock AgentCore Built-in Tools vends account-level and tool-level metrics at 1-minute resolution. Amazon CloudWatch aggregation and metric data retention follow standard Amazon CloudWatch data retention policies. For more information, see [Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Metric) in the *Amazon CloudWatch User Guide*.


| Name | Dimensions | Description | 
| --- | --- | --- | 
|  CPUUsed-vCPUHours  |  Service; Service, Resource  |  The total amount of virtual CPU consumed, in vCPU-hours, available at the resource and account levels. Useful for resource tracking and estimated billing visibility.  | 
|  MemoryUsed-GBHours  |  Service; Service, Resource  |  The total amount of memory consumed, in GB-hours, available at the resource and account levels. Useful for resource tracking and estimated billing visibility.  | 

Dimension explanation
+  **Service** - AgentCore.CodeInterpreter or AgentCore.Browser
+  **Resource** - The built-in tool ID

Account-level metrics are available in the Amazon CloudWatch Bedrock AgentCore Observability console under the **Built-in Tools** tab. The dashboard contains a memory usage graph and a CPU usage graph generated from the usage metrics. These graphs represent the total resource usage across all tools of the selected tool type in the account in the Region.

Tool-level metrics are available on the **Tools** page of the Amazon CloudWatch Bedrock AgentCore Observability console. The dashboard contains a memory usage graph and a CPU usage graph generated from the usage metrics. The graphs represent the total resource usage across all sessions of the selected tool.

**Note**  
Telemetry data is provided for monitoring purposes. Actual billing is calculated based on metered usage data and may differ from telemetry values due to aggregation timing, reconciliation processes, and measurement precision. Refer to your AWS billing statement for authoritative charges.

 **Vended logs** 

Amazon Bedrock AgentCore Built-in Tools lets you enable vended logs for session-level usage telemetry at 1-second granularity. Each log record contains a one-second resource usage datum. Currently supported metrics include:
+ Code Interpreter: codeInterpreter.vcpu.hours.used and codeInterpreter.memory.gb_hours.used
+ Browser: browser.vcpu.hours.used and browser.memory.gb_hours.used

Each resource usage datum uses the following schema in the log record.

 **Code Interpreter** 


| Log type | Log fields | Description | 
| --- | --- | --- | 
|  USAGE_LOGS  |  event_timestamp, resource_arn, service.name, cloud.provider, cloud.region, account.id, region, resource.id, session.id, elapsed_time_seconds, codeInterpreter.vcpu.hours.used, codeInterpreter.memory.gb_hours.used  |  Resource usage logs for session-level resource tracking.  | 

 **Browser** 


| Log type | Log fields | Description | 
| --- | --- | --- | 
|  USAGE_LOGS  |  event_timestamp, resource_arn, service.name, cloud.provider, cloud.region, account.id, region, resource.id, session.id, elapsed_time_seconds, browser.vcpu.hours.used, browser.memory.gb_hours.used  |  Resource usage logs for session-level resource tracking.  | 
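Because each record carries a one-second usage datum, a Logs Insights query over the vended log destination can total a session's consumption. This is a sketch, assuming per-record values accrue per interval; swap in the browser field names for browser sessions:

```
stats sum(codeInterpreter.vcpu.hours.used) as vcpu_hours,
      sum(codeInterpreter.memory.gb_hours.used) as gb_hours
  by session.id
```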

For more information about enabling logs, see [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md). The logs are then delivered to the configured destination (CloudWatch log group, Amazon S3, or Amazon Kinesis Data Firehose).

In the Built-in Tools Session page of the Amazon CloudWatch Bedrock AgentCore Observability Console, you can see resource usage metrics generated from these logs. To optimize your metric viewing experience, select your desired time range using the selector in the top right to focus on specific CPU and Memory Usage data.

**Note**  
Telemetry data is provided for monitoring purposes. Actual billing is calculated based on metered usage data and may differ from telemetry values due to aggregation timing, reconciliation processes, and measurement precision. Refer to your AWS billing statement for authoritative charges.

## Provided span data


To enhance observability, AgentCore provides structured spans that give visibility into built-in tools APIs. To enable this span data, you need to enable observability on your built-in tool resource. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) for steps and details. The full span data is available in Amazon CloudWatch Logs in the `aws/spans` log group. The following tables define the operations for which spans are created and the attributes for each captured span.

 **Code interpreter** 


| Operation name | Span attributes | Description | 
| --- | --- | --- | 
|  StartCodeInterpreterSession  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type  |  Starts a code interpreter session.  | 
|  StopCodeInterpreterSession  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type, session_duration_s  |  Stops a code interpreter session.  | 
|  InvokeCodeInterpreter  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type  |  Invokes a code interpreter with input code.  | 
|  CodeInterpreterSessionExpire  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type, session_duration_s  |  Expires a code interpreter session if StopCodeInterpreterSession is not called and the session times out.  | 
+ toolsession.id - the ID of the tool session
+ session_duration_s - the duration of the session in seconds before it ended

 **Browser** 


| Operation name | Span attributes | Description | 
| --- | --- | --- | 
|  StartBrowserSession  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type  |  Starts a browser session.  | 
|  StopBrowserSession  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type, session_duration_s  |  Stops a browser session.  | 
|  ConnectBrowserAutomationStream  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type  |  Connects to a browser automation stream.  | 
|  BrowserSessionExpire  |  aws.operation.name, aws.resource.arn, aws.request.id, aws.account.id, toolsession.id, aws.xray.origin, aws.resource.type, aws.region, latency_ms, error_type, session_duration_s  |  Expires a browser session if StopBrowserSession is not called and the session times out.  | 
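To explore this span data, you can query the `aws/spans` log group with CloudWatch Logs Insights. The sketch below builds parameters for boto3's `logs.start_query`; the attribute names come from the span tables above, while the 1000 ms latency threshold and one-hour window are arbitrary example choices.

```python
import time

# Logs Insights query surfacing slow built-in tool operations.
# Field names with dots are backtick-quoted per Logs Insights syntax.
QUERY = """
fields `aws.operation.name`, `toolsession.id`, latency_ms, error_type
| filter `aws.operation.name` in ["InvokeCodeInterpreter", "ConnectBrowserAutomationStream"]
| filter latency_ms > 1000
| sort latency_ms desc
| limit 20
""".strip()

now = int(time.time())
query_params = {
    "logGroupName": "aws/spans",
    "startTime": now - 3600,  # last hour
    "endTime": now,
    "queryString": QUERY,
}
# To run it: boto3.client("logs").start_query(**query_params)
```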

## Application log data


AgentCore provides structured application logs that help you gain visibility into your agent runtime invocations and session-level resource consumption. This log data is provided when you enable observability on your agent resource. See [Add observability to your Amazon Bedrock AgentCore resources](observability-configure.md) for steps and details. AgentCore can output logs to CloudWatch Logs, Amazon S3, or a Firehose stream. If you use a CloudWatch Logs destination, these logs are stored under your agent’s application logs or under your own custom log group.


| Log type | Log fields | Description | 
| --- | --- | --- | 
|  APPLICATION_LOGS  |  timestamp, resource_arn, event_timestamp, account_id, request_id, session_id, trace_id, span_id, service_name, operation, request_payload, response_payload  |  Application logs for InvokeCodeInterpreter with tracing fields, request, and response payloads.  | 

# AgentCore generated identity observability data
Identity observability data

This section describes observability data emitted by the Bedrock AgentCore Identity service. This data provides visibility into the performance, usage, and operational health of the service, allowing you to monitor authorization activities for your AI agents and workloads.

You can use the identity observability data described in this section in the following ways:
+ Monitor usage: track API call volume and throttling events across workload identities and credential providers
+ Track inbound authorization: monitor success and failure rates for workload access token operations
+ Analyze resource access patterns: gain insights into OAuth2 provider and API key usage patterns by provider type and flow
+ Troubleshoot issues: identify and diagnose errors by type, operation, and resource
+ Plan capacity: use metrics to understand usage patterns and plan for scaling

These metrics can be viewed in the Amazon CloudWatch console, retrieved via the Amazon CloudWatch API, or incorporated into Amazon CloudWatch dashboards and alarms for proactive monitoring.

**Topics**
+ [Usage, authorization, and resource access metrics](#observability-identity-usage-auth-ra-metrics)
+ [Provided span data](#observability-identity-span-data)
+ [Provided log data](#observability-identity-log-data)

## Usage, authorization, and resource access metrics


The following dimension reference applies to the metrics described in this section:
+ WorkloadIdentity: the workload identity name making the request.
+ WorkloadIdentityDirectory: the directory containing the workload identity (typically `default` ).
+ TokenVault: the token vault being accessed (typically `default` ).
+ ProviderName: the name of the credential provider (for example, `MyGoogleProvider` or `MySlackProvider` ).
+ FlowType: the OAuth2 flow type (USER_FEDERATION or M2M).
+ ExceptionType: the specific error type (for example, ValidationException or ThrottlingException).

### Usage metrics


These metrics are emitted in the AWS/Usage namespace and track service usage at the AWS account level.


| Metric name | Dimensions | Description | 
| --- | --- | --- | 
|  CallCount  |  Service, Type, Class, Resource  |  Tracks the number of calls made to Identity Service operations. You can use this metric to monitor usage against service quotas.  | 
|  ThrottleCount  |  Service, Type, Class, Resource  |  Tracks the number of throttled calls for Identity Service operations.  | 

### Authorization metrics


These metrics are emitted in the AWS/Bedrock-AgentCore namespace and provide insights into authentication and authorization operations.


| Metric name | Dimensions | Description | 
| --- | --- | --- | 
|  WorkloadAccessTokenFetchSuccess  |  WorkloadIdentity, WorkloadIdentityDirectory, Operation  |  Tracks successful workload access token fetch operations.  | 
|  WorkloadAccessTokenFetchFailures  |  WorkloadIdentity, WorkloadIdentityDirectory, Operation, ExceptionType  |  Tracks failed workload access token fetch operations by exception type.  | 
|  WorkloadAccessTokenFetchThrottles  |  WorkloadIdentity, WorkloadIdentityDirectory, Operation  |  Tracks throttled workload access token fetch operations.  | 

### Resource access metrics


These metrics track credential provider operations for accessing external resources.


| Metric name | Dimensions | Description | 
| --- | --- | --- | 
|  ResourceAccessTokenFetchSuccess  |  WorkloadIdentity, WorkloadIdentityDirectory, TokenVault, ProviderName, Type  |  Tracks successful OAuth2 token fetch operations from credential providers.  | 
|  ResourceAccessTokenFetchFailures  |  WorkloadIdentity, WorkloadIdentityDirectory, TokenVault, ProviderName, Type, ExceptionType  |  Tracks failed OAuth2 token fetch operations by exception type.  | 
|  ResourceAccessTokenFetchThrottles  |  WorkloadIdentity, WorkloadIdentityDirectory, TokenVault, ProviderName, Type  |  Tracks throttled OAuth2 token fetch operations.  | 
|  ApiKeyFetchSuccess  |  WorkloadIdentity, WorkloadIdentityDirectory, TokenVault, ProviderName  |  Tracks successful API key fetch operations.  | 
|  ApiKeyFetchFailures  |  WorkloadIdentity, WorkloadIdentityDirectory, TokenVault, ProviderName, ExceptionType  |  Tracks failed API key fetch operations by exception type.  | 
|  ApiKeyFetchThrottles  |  WorkloadIdentity, WorkloadIdentityDirectory, TokenVault, ProviderName  |  Tracks throttled API key fetch operations.  | 
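You can alarm on these metrics for proactive monitoring. The sketch below builds parameters for boto3's `cloudwatch.put_metric_alarm`; the namespace and dimension names follow the tables above, while the alarm name, provider, workload identity, and threshold are placeholders. Note that CloudWatch matches alarm dimensions exactly, so the dimension set must match the one the metric is actually emitted with in your account.

```python
# Sketch: alarm when OAuth2 token fetches from one provider start failing.
alarm_params = {
    "AlarmName": "AgentCoreIdentity-TokenFetchFailures",     # placeholder
    "Namespace": "AWS/Bedrock-AgentCore",
    "MetricName": "ResourceAccessTokenFetchFailures",
    "Dimensions": [
        {"Name": "WorkloadIdentity", "Value": "my-agent-workload"},  # placeholder
        {"Name": "WorkloadIdentityDirectory", "Value": "default"},
        {"Name": "TokenVault", "Value": "default"},
        {"Name": "ProviderName", "Value": "MyGoogleProvider"},       # placeholder
    ],
    "Statistic": "Sum",
    "Period": 300,                 # 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 5,                # example: >5 failures in 5 minutes
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "notBreaching",
}
# To create it: boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```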

## Provided span data


To enhance observability, AgentCore Identity provides structured spans that give visibility into identity service operations. To enable span data, you need to enable observability on your workload identity or credential provider resource.

This span data is available in the `aws/spans` log group in Amazon CloudWatch Logs. The following tables define the operations for which spans are created and their attributes.

The following attribute explanations apply to the information in the tables below:
+ aws.operation.name - the operation name being performed
+ aws.resource.arn - the Amazon Resource Name for the identity resource
+ aws.request_id - unique request ID for the operation
+ aws.account.id - user’s AWS account ID
+ workload.identity.id - the workload identity name
+ workload.identity.directory - the workload identity directory
+ credential.provider.name - name of the credential provider
+ credential.provider.type - type of credential provider (OAuth2, API Key)
+ token.vault.name - token vault name
+ oauth2.flow - OAuth2 flow type (USER_FEDERATION, M2M)
+ latency_ms - operation latency in milliseconds
+ error_type - error classification (throttle, system, user, null if successful)
+ aws.region - AWS region where the operation occurred

 **Workload Identity Operations** 


| Operation | Span attributes | Description | 
| --- | --- | --- | 
|  GetWorkloadAccessToken  |  aws.operation.name, aws.resource.arn, aws.request_id, aws.account.id, workload.identity.id, workload.identity.directory, aws.region, latency_ms, error_type  |  Fetches workload access token for machine-to-machine authentication  | 
|  GetWorkloadAccessTokenForJWT  |  aws.operation.name, aws.resource.arn, aws.request_id, aws.account.id, workload.identity.id, workload.identity.directory, issuer, user_sub, aws.region, latency_ms, error_type  |  Fetches workload access token using JWT user token  | 
|  GetWorkloadAccessTokenForUserId  |  aws.operation.name, aws.resource.arn, aws.request_id, aws.account.id, workload.identity.id, workload.identity.directory, aws.region, latency_ms, error_type  |  Fetches workload access token for specific user ID  | 

 **Credential Provider Operations** 


| Operation | Span attributes | Description | 
| --- | --- | --- | 
|  GetResourceOAuth2Token  |  aws.operation.name, aws.resource.arn, aws.request_id, aws.account.id, workload.identity.id, credential.provider.name, credential.provider.type, token.vault.name, oauth2.flow, aws.region, latency_ms, error_type  |  Fetches OAuth2 access token from credential provider  | 
|  GetResourceAPIKey  |  aws.operation.name, aws.resource.arn, aws.request_id, aws.account.id, workload.identity.id, credential.provider.name, token.vault.name, aws.region, latency_ms, error_type  |  Fetches API key from credential provider  | 

## Provided log data


AgentCore Identity provides structured application logs that help you gain visibility into identity service operations. This log data is provided when enabling observability on your identity resources.

AgentCore can output logs to Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Firehose stream. If you use a CloudWatch Logs destination, these logs are stored under your resource’s application logs or under your own custom log group.


| Log type | Log fields | Description | 
| --- | --- | --- | 
|  Application Logs  |  timestamp, resource_arn, event_timestamp, account_id, request_id, trace_id, span_id, service_name, operation, request_payload, response_payload  |  Application logs for Identity Service operations with tracing fields, request, and response payloads  | 

Log field explanations:
+ timestamp - Unix timestamp of the log event
+ resource_arn - ARN of the identity resource
+ event_timestamp - ISO 8601 timestamp string
+ account_id - AWS account ID
+ request_id - unique request identifier
+ trace_id - distributed tracing ID
+ span_id - span identifier for the operation
+ service_name - service name (BedrockAgentCore.Identity)
+ operation - operation name (GetWorkloadAccessToken, etc.)
+ request_payload - request payload
+ response_payload - response payload

# AgentCore generated Policy in AgentCore observability data
Policy in AgentCore observability data

For policy and policy engine resource types, Amazon Bedrock AgentCore publishes invocation metrics to CloudWatch by default. Additional span data is available when traces are enabled for the attached AgentCore Gateway resource, which then emits spans for policy-related operations. See [Enabling observability for AgentCore runtime, memory, gateway, built-in tools, and identity resources](observability-configure.md#observability-configure-cloudwatch) to learn more about enablement.

**Topics**
+ [Provided metric data](#observability-policy-metrics-provided)
+ [Provided span data](#observability-policy-spans)

## Provided metric data


Amazon Bedrock AgentCore publishes the following invocation metrics by default to the `Bedrock-Agentcore` CloudWatch namespace. These metrics can be used to observe and monitor policy evaluations and overall performance.


| Metric | Description | Unit | 
| --- | --- | --- | 
|  Invocations  |  Number of requests made to the service  |  Count  | 
|  SystemErrors  |  Number of server-side errors (5xx)  |  Count  | 
|  UserErrors  |  Number of client-side errors (4xx)  |  Count  | 
|  Latency  |  Total time elapsed from sending a request to receiving a response  |  Milliseconds  | 
|  AllowDecisions  |  Number of decisions that resulted in ALLOW  |  Count  | 
|  DenyDecisions  |  Number of decisions that resulted in DENY  |  Count  | 
|  TotalMismatchedPolicies  |  Number of failed policies for a given request due to either missing attribute or type mismatch  |  Count  | 
|  PolicyMismatch  |  Number of failures for a specific policy caused by missing attribute or type mismatch  |  Count  | 
|  MismatchErrors  |  Number of requests that failed due to at least one mismatched policy  |  Count  | 
|  DeterminingPolicies  |  Number of determining policies for a request  |  Count  | 
|  NoDeterminingPolicies  |  Number of requests denied due to no determining policies  |  Count  | 
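A useful derived view is the DENY rate, computed with CloudWatch metric math over the metrics above. The sketch below builds a `GetMetricData` request body for boto3; the namespace follows this section, while the policy engine identifier is a placeholder.

```python
# Metric math: DENY decisions as a percentage of all policy invocations.
dims = [{"Name": "PolicyEngine", "Value": "my-policy-engine-id"}]  # placeholder

metric_data_queries = [
    {"Id": "deny",
     "MetricStat": {"Metric": {"Namespace": "Bedrock-Agentcore",
                               "MetricName": "DenyDecisions",
                               "Dimensions": dims},
                    "Period": 300, "Stat": "Sum"},
     "ReturnData": False},
    {"Id": "total",
     "MetricStat": {"Metric": {"Namespace": "Bedrock-Agentcore",
                               "MetricName": "Invocations",
                               "Dimensions": dims},
                    "Period": 300, "Stat": "Sum"},
     "ReturnData": False},
    {"Id": "deny_rate",
     "Expression": "100 * deny / total",
     "Label": "Deny rate (%)",
     "ReturnData": True},
]
# To run it: cloudwatch.get_metric_data(
#     MetricDataQueries=metric_data_queries, StartTime=..., EndTime=...)
```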

### Metric Dimensions


The following dimensions are available for the above metrics. These dimensions allow you to filter and analyze metric data at finer levels of detail.


| Dimension | Description | 
| --- | --- | 
|  OperationName  |  The name of the API operation, valid values are `AuthorizeAction` and `PartiallyAuthorizeActions`   | 
|  PolicyEngine  |  The Policy Engine identifier associated with the metric  | 
|  Policy  |  The Policy identifier associated with the metric  | 
|  TargetResource  |  The AgentCore Gateway resource identifier associated with the request  | 
|  ToolName  |  The name of the tool the metric applies to  | 
|  Mode  |  The enforcement mode configured on the AgentCore Gateway, valid values are `LOG_ONLY` and `ENFORCE`   | 

## Provided span data


Amazon Bedrock AgentCore provides additional structured span data through AgentCore Gateway observability, offering deeper insights into API invocations. Policy in AgentCore span data is available after enabling traces for your AgentCore Gateway resource and can be found in CloudWatch `aws/spans` log group.


| Operation | Span Attribute | Description | 
| --- | --- | --- | 
|  AuthorizeAction  |  aws.agentcore.policy.authorization_decision  |  The authorization decision after evaluating policies, valid values are `ALLOW` and `DENY`   | 
|  |  aws.agentcore.policy.authorization_reason  |  Reason for the authorization decision  | 
|  |  aws.agentcore.policy.determining_policies  |  List of Policy identifiers that determined the decision outcome  | 
|  |  aws.agentcore.policy.mismatched_policies  |  List of Policy identifiers that failed due to missing attributes or type mismatches  | 
|  |  aws.agentcore.policy.target_resource.id  |  AgentCore Gateway resource identifier the request applies to  | 
|  |  aws.agentcore.gateway.policy.arn  |  Policy Engine Amazon Resource Name (ARN) configured on the AgentCore Gateway  | 
|  |  aws.agentcore.gateway.policy.mode  |  Policy Engine enforcement mode configured on the AgentCore Gateway, valid values are `LOG_ONLY` and `ENFORCE`   | 
|  PartiallyAuthorizeActions  |  aws.agentcore.policy.allowed_tools  |  List of tool names that evaluated to an `ALLOW` decision  | 
|  |  aws.agentcore.policy.denied_tools  |  List of tool names that evaluated to a `DENY` decision  | 
|  |  aws.agentcore.policy.target_resource.id  |  AgentCore Gateway resource identifier the request applies to  | 
|  |  aws.agentcore.gateway.policy.arn  |  Policy Engine Amazon Resource Name (ARN) configured on the AgentCore Gateway  | 
|  |  aws.agentcore.gateway.policy.mode  |  Policy Engine enforcement mode configured on the AgentCore Gateway, valid values are `LOG_ONLY` and `ENFORCE`   | 

# View observability data for your Amazon Bedrock AgentCore agents
View metrics for your agents

After implementing observability in your agent, you can view the collected metrics and traces in both the CloudWatch console generative AI observability page and in CloudWatch Logs. Refer to the following sections to learn how to view metrics for your agents.

## View data using generative AI observability in Amazon CloudWatch


The CloudWatch generative AI observability page displays all of the service-provided metrics output by the AgentCore agent runtime, as well as span- and trace-derived data if you have enabled instrumentation in your agent code. To view the observability dashboard in CloudWatch, open the [Amazon CloudWatch GenAI Observability](https://console.aws.amazon.com/cloudwatch/home#gen-ai-observability) page.

With generative AI observability in CloudWatch, you can view tailored dashboards with graphs and other visualizations of your data, as well as error breakdowns, trace visualizations and more. To learn more about using generative AI observability in CloudWatch, including how to look at your agents' individual session and trace data, see [Amazon Bedrock AgentCore agents](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AgentCore-Agents.html) in the *Amazon CloudWatch user guide*.

## View other data in CloudWatch


All of the service-provided metrics and spans can also be viewed in CloudWatch, along with any metrics that your instrumented agent code outputs.

To view this data, refer to the following sections.

### Logs


1. Open the [CloudWatch](https://console.aws.amazon.com/cloudwatch/home) console.

1. In the left-hand navigation pane, expand **Logs** and select **Log groups**.

1. Use the search field to find the log group for your agent, memory, or gateway resource.

   AgentCore agent log groups have the following format:
   +  **Standard logs** - stdout/stderr output
     +  **Location** : /aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/[runtime-logs] <UUID>
     +  **Contains** : Runtime errors, application logs, debugging statements
     +  **Example Usage** :
       + print("Processing request…") # Appears in standard logs
       + logging.info("Request processed successfully") # Appears in standard logs
   +  **OTEL structured logs** - Detailed operation information
     +  **Location** : /aws/bedrock-agentcore/runtimes/<agent_id>-<endpoint_name>/otel-rt-logs
     +  **Contains** : Execution details, error tracking, performance data
     +  **Automatic collection** : No additional code required - generated by ADOT instrumentation
     +  **Benefits** : Can include correlation IDs linking logs to relevant traces

### Traces and Spans


Traces provide visibility into request execution paths through your agent:
+ Location: `/aws/spans/default` 
+ Access via: CloudWatch Transaction Search console
+ Requirements: CloudWatch Transaction Search must be enabled

Traces automatically capture:
+ Agent invocation sequences
+ Integration with framework components (LangChain, etc.)
+ LLM calls and responses
+ Tool invocations and results
+ Error paths and exceptions

For distributed tracing across services, you can use standard HTTP headers:
+  AWS X-Ray format: `X-Amzn-Trace-Id: Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1` 
+ W3C format: `traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01` 
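If you propagate trace context yourself, the W3C header shown above splits into four hyphen-separated fields. This minimal parser is a sketch that handles only the common `00`/`01` flag values:

```python
# Parse a W3C traceparent header: version-traceid-parentid-flags.
def parse_traceparent(header: str) -> dict:
    version, trace_id, parent_id, flags = header.split("-")
    # trace-id is 16 bytes hex, parent-id is 8 bytes hex per the spec
    assert len(trace_id) == 32 and len(parent_id) == 16
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "sampled": flags == "01",  # simplification: only checks the common values
    }

ctx = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```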

To view traces:
+ Navigate to CloudWatch console
+ Select **Transaction Search** from the left navigation
+ Filter by service name or other criteria
+ Select a trace to view the detailed execution graph

### Metrics


If you have enabled observability by instrumenting your agent code as described in [Enabling AgentCore observability](observability-configure.md#observability-configure-builtin) , your agent automatically generates OTEL metrics, which are sent to CloudWatch using Embedded Metric Format (EMF):
+ Namespace: `bedrock-agentcore` 
+ Access via: CloudWatch Metrics console
+ Contents: Custom metrics generated by your agent code and frameworks
+ Automatic collection: No additional code required - generated by ADOT instrumentation
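ADOT produces these EMF records for you, so the following is only illustrative of the record shape CloudWatch extracts metrics from; the metric and dimension names below are hypothetical, not ones AgentCore emits.

```python
import json
import time

# Shape of an EMF record: the _aws block declares the metric, and the
# root-level members carry the dimension and metric values.
emf_record = {
    "_aws": {
        "Timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "CloudWatchMetrics": [{
            "Namespace": "bedrock-agentcore",
            "Dimensions": [["AgentName"]],
            "Metrics": [{"Name": "ToolCallLatency", "Unit": "Milliseconds"}],
        }],
    },
    "AgentName": "my-agent",      # hypothetical dimension value
    "ToolCallLatency": 142.0,     # hypothetical metric value
}

# Writing this JSON line to a CloudWatch Logs stream makes CloudWatch
# extract ToolCallLatency as a metric automatically.
emf_line = json.dumps(emf_record)
```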

In addition to agent-emitted metrics, the AgentCore service publishes standard service metrics to CloudWatch. Refer to [AgentCore generated runtime observability data](observability-runtime-metrics.md) for a list of these metrics.

# Monitor AgentCore resources across accounts
Cross-account monitoring

You can use Amazon CloudWatch cross-account observability to monitor Amazon Bedrock AgentCore resources across multiple AWS accounts from a single monitoring account. This enables you to view agent metrics, traces, sessions, and resource data from source accounts without switching between accounts.

When cross-account observability is enabled, the AgentCore Observability console in your monitoring account automatically displays data from all linked source accounts alongside your local account data.

## Prerequisites


Before you can monitor AgentCore resources across accounts, you must complete the following:
+  **Set up a monitoring account** – Configure a central AWS account as your monitoring account in CloudWatch Settings. For instructions, see [CloudWatch cross-account observability](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html).
+  **Link source accounts** – Link one or more source accounts to your monitoring account using AWS Organizations or individual account linking. Source accounts must share the required telemetry types (Metrics and Logs).
+  **Deploy AgentCore resources** – Ensure your AgentCore agents, gateways, memory, identity, and built-in tool resources are deployed in the source accounts with observability enabled.

## How to set up cross-account monitoring


### Step 1: Configure the monitoring account

+ Open the [CloudWatch console](https://console.aws.amazon.com/cloudwatch/).
+ In the left navigation pane, choose **Settings**.
+ In the **Monitoring account configuration** section, choose **Configure**.
+ Select the telemetry types to share:
  + At minimum, select **Metrics** and **Logs** to enable AgentCore cross-account observability.
+ Complete the monitoring account setup wizard.

### Step 2: Link source accounts


Link your source accounts to the monitoring account using one of the following methods:
+  **AWS Organizations** (recommended) – Automatically links all accounts in your organization or organizational unit. New accounts are onboarded automatically.
+  **Individual account linking** – Use a CloudFormation template or URL to link specific accounts.

When configuring source accounts, ensure the same telemetry types selected in the monitoring account are also enabled in the source account.

For detailed instructions, see [Link monitoring accounts with source accounts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account-Setup.html).

### Step 3: View cross-account data in AgentCore Observability

+ Open the [AgentCore Observability console](https://console.aws.amazon.com/cloudwatch/home#gen-ai-observability) in your monitoring account.
+ The console automatically displays data from all linked source accounts.

## Set up cross-account monitoring using infrastructure as code


You can use AWS CloudFormation to configure cross-account observability programmatically using [CloudWatch Observability Access Manager (OAM)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html) resources.

For the required IAM permissions to create sinks and links, see [Necessary permissions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account-Setup.html#Unified-Cross-Account-permissions-setup).

### Monitoring account: Create a sink


In your monitoring account, create an OAM sink that accepts telemetry from source accounts.

You can scope the sink policy in one of the following ways:
+  **By organization (recommended)** – Use `aws:PrincipalOrgID` to allow all accounts in your AWS Organizations organization. This is the simplest approach and automatically includes new accounts added to the organization.
+  **By individual account IDs** – List specific source account IDs as principals. Use this approach if you need fine-grained control over which accounts can link.

 **Option 1: Allow all accounts in an organization** 

Replace `<your-org-id>` with your AWS Organizations organization ID (for example, `o-a1b2c3d4e5`).

```
AWSTemplateFormatVersion: '2010-09-09'
Description: OAM Sink for cross-account AgentCore Observability (organization-wide)

Resources:
  ObservabilitySink:
    Type: AWS::Oam::Sink
    Properties:
      Name: AgentCoreObservabilitySink
      Policy:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: '*'
            Action:
              - 'oam:CreateLink'
              - 'oam:UpdateLink'
            Resource: '*'
            Condition:
              StringEquals:
                aws:PrincipalOrgID: '<your-org-id>'
              ForAllValues:StringEquals:
                oam:ResourceTypes:
                  - 'AWS::Logs::LogGroup'
                  - 'AWS::CloudWatch::Metric'
      Tags:
        Purpose: AgentCoreObservability

Outputs:
  SinkArn:
    Value: !GetAtt ObservabilitySink.Arn
    Description: Share this ARN with source accounts to create links
```

 **Option 2: Allow specific source accounts** 

Replace `<source-account-id-1>` and `<source-account-id-2>` with the AWS account IDs of your source accounts.

```
AWSTemplateFormatVersion: '2010-09-09'
Description: OAM Sink for cross-account AgentCore Observability (specific accounts)

Resources:
  ObservabilitySink:
    Type: AWS::Oam::Sink
    Properties:
      Name: AgentCoreObservabilitySink
      Policy:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - '<source-account-id-1>'
                - '<source-account-id-2>'
            Action:
              - 'oam:CreateLink'
              - 'oam:UpdateLink'
            Resource: '*'
            Condition:
              ForAllValues:StringEquals:
                oam:ResourceTypes:
                  - 'AWS::Logs::LogGroup'
                  - 'AWS::CloudWatch::Metric'
      Tags:
        Purpose: AgentCoreObservability

Outputs:
  SinkArn:
    Value: !GetAtt ObservabilitySink.Arn
    Description: Share this ARN with source accounts to create links
```

### Source account: Create a link


In each source account, create an OAM link to the monitoring account’s sink. Replace `<sink-arn-from-monitoring-account>` with the sink ARN from the previous step.

```
AWSTemplateFormatVersion: '2010-09-09'
Description: OAM Link for cross-account AgentCore Observability

Resources:
  ObservabilityLink:
    Type: AWS::Oam::Link
    Properties:
      LabelTemplate: '$AccountName'
      ResourceTypes:
        - 'AWS::Logs::LogGroup'
        - 'AWS::CloudWatch::Metric'
      SinkIdentifier: '<sink-arn-from-monitoring-account>'
      Tags:
        Purpose: AgentCoreObservability
```

To deploy this link across all member accounts in your organization, use AWS CloudFormation StackSets. For instructions, see [Link monitoring accounts with source accounts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account-Setup.html).

For more information about OAM resources, see the [AWS CloudFormation OAM resource reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_Oam.html).
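If you prefer calling the API directly over deploying the CloudFormation link template, the same link can be created with the OAM CreateLink API. This sketch builds the request parameters; the sink ARN is a placeholder you would replace with the output from your monitoring account.

```python
# Parameters mirroring the AWS::Oam::Link template above.
link_params = {
    "LabelTemplate": "$AccountName",
    "ResourceTypes": ["AWS::Logs::LogGroup", "AWS::CloudWatch::Metric"],
    "SinkIdentifier": "arn:aws:oam:us-east-1:111122223333:sink/EXAMPLE",  # placeholder
    "Tags": {"Purpose": "AgentCoreObservability"},
}
# Run in each source account: boto3.client("oam").create_link(**link_params)
```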

## Filtering cross-account data


You can filter data by account in the sessions and traces tables:
+ Use the property filter in the table.
+ Select **Account ID** as the filter property.
+ Enter the source account ID to filter results to a specific account.

## Limitations

+  **Cross-account resource actions** – Some actions are unavailable for cross-account resources, such as navigating to the Bedrock console for resource details. You must sign in to the source account directly to perform these actions.
+  **OAM link required** – Cross-account data is only visible while the OAM link between the monitoring and source accounts is active. If the link is removed, cross-account data will no longer appear.
+  **Telemetry types** – Both the monitoring account and source account must have Metrics and Logs enabled for full AgentCore observability. If only a subset is shared, some data may be missing.
+  **Regional** – Cross-account observability works within a single AWS Region. The monitoring account and source accounts must be in the same Region.

## Related resources

+  [CloudWatch cross-account observability](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Cross-Account.html) 
+  [Get started with AgentCore Observability](observability-get-started.md) 
+  [View observability data for your Amazon Bedrock AgentCore agents](observability-view.md) 
+  [Observability Access Manager API Reference](https://docs.aws.amazon.com/OAM/latest/APIReference/Welcome.html) 