

# Built-in strategies

AgentCore Memory provides built-in strategies to create memories. Each built-in strategy consists of steps to handle memory creation, including the following (different strategies employ different steps):
+  **Extraction** – Identifies useful insights from short-term memory to place into long-term memory as memory records.
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.
+  **Reflection** – Generates insights across episodes.

Each step is defined by a system prompt, which is a combination of the following:
+  **Instructions** – Guide the LLM’s behavior. Can include step-by-step processing guidelines (how the model should reason and extract or consolidate information).
+  **Output schema** – Defines how the model should present the result.

Each memory strategy provides a structured output format tailored to its purpose. The output is not uniform across strategies, because the type of information being stored and retrieved differs. This ensures that each memory type exposes only the fields most relevant to its strategy. You can find the output formats in the system prompts for each strategy.

You can combine multiple strategies when creating memories.

**Topics**
+ [Semantic memory strategy](semantic-memory-strategy.md)
+ [User preference memory strategy](user-preference-memory-strategy.md)
+ [Summary strategy](summary-strategy.md)
+ [Episodic memory strategy](episodic-memory-strategy.md)

# Semantic memory strategy

The semantic memory strategy is designed to identify and extract key pieces of factual information and contextual knowledge from conversational data. This lets your agent build a persistent knowledge base about the entities, events, and key details discussed during an interaction.

 **Steps in the strategy** 

The semantic memory strategy includes the following steps:
+  **Extraction** – Identifies useful insights from short-term memory to place into long-term memory as memory records.
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.

**Note**  
The semantic strategy processes only `USER` and `ASSISTANT` role messages during extraction. For more information about roles in agent conversations, see [Conversational](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_Conversational.html).

 **Strategy output** 

The semantic memory strategy returns facts as JSON objects, each representing a standalone personal fact about the user.

 **Example of facts captured by this strategy** 
+ An order number ($1XYZ-123) is associated with a specific support case.
+ A project's deadline is October 25th.
+ The user is running version 2.1 of the software.

By referencing this stored knowledge, your agent can provide more accurate, context-aware responses, perform multi-step tasks that rely on previously stated information, and avoid asking users to repeat key details.

 **Default namespace** 

 `/strategy/{memoryStrategyId}/actors/{actorId}/` 
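As a sketch, the namespace template resolves by substituting the strategy and actor IDs. The IDs below are hypothetical, for illustration only:

```python
# Hypothetical strategy and actor IDs, for illustration only.
template = "/strategy/{memoryStrategyId}/actors/{actorId}/"
namespace = template.format(memoryStrategyId="semStrategy-abc123", actorId="user-42")
print(namespace)  # /strategy/semStrategy-abc123/actors/user-42/
```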

**Topics**
+ [System prompt for semantic memory strategy](memory-system-prompt.md)

# System prompt for semantic memory strategy

The semantic strategy includes instructions and output schemas in the default prompts for the extraction and consolidation steps.

## Extraction instructions


```
You are a long-term memory extraction agent supporting a lifelong learning system. Your task is to identify and extract meaningful information about the users from a given list of messages.

Analyze the conversation and extract structured information about the user according to the schema below. Only include details that are explicitly stated or can be logically inferred from the conversation.

- Extract information ONLY from the user messages. You should use assistant messages only as supporting context.
- If the conversation contains no relevant or noteworthy information, return an empty list.
- Do NOT extract anything from prior conversation history, even if provided. Use it solely for context.
- Do NOT incorporate external knowledge.
- Avoid duplicate extractions.

IMPORTANT: Maintain the original language of the user's conversation. If the user communicates in a specific language, extract and format the extracted information in that same language.
```

## Extraction output schema


```
Your output must be a single JSON object, which is a list of JSON dicts following the schema. Do not provide any preamble or any explanatory text.

<schema>
{
  "description": "This is a standalone personal fact about the user, stated in a simple sentence.\\nIt should represent a piece of personal information, such as life events, personal experience, and preferences related to the user.\\nMake sure you include relevant details such as specific numbers, locations, or dates, if presented.\\nMinimize the coreference across the facts, e.g., replace pronouns with actual entities.",
  "properties": {
    "fact": {
      "description": "The memory as a well-written, standalone fact about the user. Refer to the user's instructions for more information on the preferred memory organization.",
      "title": "Fact",
      "type": "string"
    }
  },
  "required": [
    "fact"
  ],
  "title": "SemanticMemory",
  "type": "object"
}
</schema>
```
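Because the model must emit a bare JSON list matching this schema, a client can validate the output before storing it. The following is a minimal sketch; the helper name and sample string are illustrative, not part of the service API:

```python
import json

def validate_semantic_output(raw: str) -> list[dict]:
    """Check that raw model output is a JSON list of {"fact": str} objects,
    per the SemanticMemory schema, and return the parsed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("output must be a JSON list")
    for record in records:
        if not isinstance(record, dict) or not isinstance(record.get("fact"), str):
            raise ValueError(f"record does not match schema: {record!r}")
    return records

sample = '[{"fact": "The user is running version 2.1 of the software."}]'
print(validate_semantic_output(sample))
```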

## Consolidation instructions


```
You are a conservative memory manager that preserves existing information while carefully integrating new facts.

Your operations are:
- **AddMemory**: Create new memory entries for genuinely new information
- **UpdateMemory**: Add complementary information to existing memories while preserving original content
- **SkipMemory**: No action needed (information already exists or is irrelevant)

If the operation is "AddMemory", you need to output:
1. The `memory` field with the new memory content

If the operation is "UpdateMemory", you need to output:
1. The `memory` field with the original memory content
2. The update_id field with the ID of the memory being updated
3. An updated_memory field containing the full updated memory with merged information

## Decision Guidelines

### AddMemory (New Information)
Add only when the retrieved fact introduces entirely new information not covered by existing memories.

**Example**:
- Existing Memory: `[{"id": "0", "text": "User is a software engineer"}]`
- Retrieved Fact: `["Name is John"]`
- Action: AddMemory with new ID

### UpdateMemory (Preserve + Extend)
Preserve existing information while adding new details. Combine information coherently without losing specificity or changing meaning.

**Critical Rules for UpdateMemory**:
- **Preserve timestamps and specific details** from the original memory
- **Maintain semantic accuracy** - don't generalize or change the meaning
- Only enhance when new information genuinely adds value without contradiction
- Only enhance when new information is **closely relevant** to existing memories
- Attend to novel information that deviates from existing memories and expectations
- Consolidate and compress redundant memories to maintain information-density; strengthen based on reliability and recency; maximize SNR by avoiding idle words

**Example**:
- Existing: `[{"id": "1", "text": "Caroline attended an LGBTQ support group meeting that she found emotionally powerful."}]`
- Retrieved: `["Caroline found the support group very helpful"]`
- Action: UpdateMemory to `"Caroline attended an LGBTQ support group meeting that she found emotionally powerful and very helpful."`

**When NOT to update**:
- Information is essentially the same: "likes pizza" vs "loves pizza"
- Updating would change the fundamental meaning
- New fact contradicts existing information (use AddMemory instead)
- New fact contains new events with timestamps that differ from existing facts. Since enhanced memories share timestamps with original facts, this would create temporal contradictions. Use AddMemory instead.

### SkipMemory (No Change)
Use when information already exists in sufficient detail or when new information doesn't add meaningful value.

## Key Principles

- Conservation First: Preserve all specific details, timestamps, and context
- Semantic Preservation: Never change the core meaning of existing memories
- Coherent Integration: Ensure enhanced memories read naturally and logically
```

## Consolidation output schema


````
## Response Format

Return only this JSON structure, using double quotes for all keys and string values:
```json
[
  {
    "memory": {
      "fact": "<content>"
    },
    "operation": "<AddMemory_or_UpdateMemory>",
    "update_id": "<existing_id_for_UpdateMemory>",
    "updated_memory": {
      "fact": "<content>"
    }
  },
  ...
]
```

Only include entries with AddMemory or UpdateMemory operations. Return empty memory array if no changes are needed.
Do not return anything except the JSON format.
````
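A client that mirrors this contract could apply the returned operations to a local copy of its records. This sketch assumes a simple dict keyed by record ID; in practice, the service assigns and manages record IDs itself:

```python
import json

def apply_operations(store: dict[str, dict], ops_json: str) -> dict[str, dict]:
    """Apply AddMemory/UpdateMemory operations from the consolidation output
    to an in-memory record store keyed by record ID. New-record IDs are
    generated naively here, purely for illustration."""
    for i, op in enumerate(json.loads(ops_json)):
        if op["operation"] == "AddMemory":
            store[f"new-{i}"] = op["memory"]
        elif op["operation"] == "UpdateMemory":
            store[op["update_id"]] = op["updated_memory"]
    return store

store = {"1": {"fact": "Caroline attended an LGBTQ support group meeting."}}
ops = json.dumps([{
    "memory": {"fact": "Caroline attended an LGBTQ support group meeting."},
    "operation": "UpdateMemory",
    "update_id": "1",
    "updated_memory": {"fact": "Caroline attended an LGBTQ support group meeting that she found very helpful."},
}])
print(apply_operations(store, ops))
```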

# User preference memory strategy

The `UserPreferenceMemoryStrategy` is designed to automatically identify and extract user preferences, choices, and styles from conversational data. This lets your agent learn from interactions and build a persistent, dynamic profile of each user over time.

 **Steps in the strategy** 

The user preference strategy includes the following steps:
+  **Extraction** – Identifies useful insights from short-term memory to place into long-term memory as memory records.
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.

**Note**  
The user preference strategy processes only `USER` and `ASSISTANT` role messages during extraction. For more information about roles in agent conversations, see [Conversational](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_Conversational.html).

 **Strategy output** 

The user preference strategy returns JSON objects with context, preference, and categories, making it easier to capture user choices and decision patterns.

 **Examples of insights captured by this strategy include:** 
+ A customer’s preferred shipping carrier or shopping brand.
+ A developer’s preferred coding style or programming language.
+ A user’s communication preferences, such as a formal or informal tone.

By leveraging this strategy, your agent can deliver highly personalized experiences, such as offering tailored recommendations, adapting its responses to a user’s style, and anticipating needs based on past choices. This creates a more relevant and effective conversational experience.

 **Default namespace** 

 `/strategy/{memoryStrategyId}/actors/{actorId}/` 

**Topics**
+ [System prompt for user preference memory strategy](memory-user-prompt.md)

# System prompt for user preference memory strategy

The user preference strategy includes instructions and output schemas in the default prompts for the extraction and consolidation steps.

## Extraction instructions


```
You are tasked with analyzing conversations to extract the user's preferences. You'll be analyzing two sets of data:

<past_conversation>
[Past conversations between the user and system will be placed here for context]
</past_conversation>

<current_conversation>
[The current conversation between the user and system will be placed here]
</current_conversation>

Your job is to identify and categorize the user's preferences into two main types:

- Explicit preferences: Directly stated preferences by the user.
- Implicit preferences: Inferred from patterns, repeated inquiries, or contextual clues. Take a close look at user's request for implicit preferences.

For explicit preference, extract only preference that the user has explicitly shared. Do not infer user's preference.

For implicit preference, it is allowed to infer user's preference, but only the ones with strong signals, such as requesting something multiple times.
```

## Extraction output schema


```
Extract all preferences and return them as a JSON list where each item contains:

1. "context": The background and reason why this preference is extracted.
2. "preference": The specific preference information
3. "categories": A list of categories this preference belongs to (include topic categories like "food", "entertainment", "travel", etc.)

For example:

[
  {
    "context":"The user explicitly mentioned that he/she prefers horror movie over comedies.",
    "preference": "Prefers horror movies over comedies",
    "categories": ["entertainment", "movies"]
  },
  {
    "context":"The user has repeatedly asked for Italian restaurant recommendations. This could be a strong signal that the user enjoys Italian food.",
    "preference": "Likely enjoys Italian cuisine",
    "categories": ["food", "cuisine"]
  }
]

Extract preferences only from <current_conversation>. Extract preference ONLY from the user messages. You should use assistant messages only as supporting context. Only extract user preferences with high confidence.

Maintain the original language of the user's conversation. If the user communicates in a specific language, extract and format the extracted information in that same language.

Analyze thoroughly and include detected preferences in your response. Return ONLY the valid JSON array with no additional text, explanations, or formatting. If there is nothing to extract, simply return empty list.
```
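Given that contract, downstream code can parse the returned array and, for example, index preferences by category. A sketch using the strategy's own sample records:

```python
import json
from collections import defaultdict

# Sample extraction output, taken from the strategy's example above.
extraction = """[
  {"context": "The user explicitly mentioned that he/she prefers horror movie over comedies.",
   "preference": "Prefers horror movies over comedies",
   "categories": ["entertainment", "movies"]},
  {"context": "The user has repeatedly asked for Italian restaurant recommendations.",
   "preference": "Likely enjoys Italian cuisine",
   "categories": ["food", "cuisine"]}
]"""

# Group preference strings under each category they were tagged with.
by_category = defaultdict(list)
for record in json.loads(extraction):
    for category in record["categories"]:
        by_category[category].append(record["preference"])

print(dict(by_category))
```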

## Consolidation instructions


```
# ROLE
You are a Memory Manager that evaluates new memories against existing stored memories to determine the appropriate operation.

# INPUT
You will receive:

1. A list of new memories to evaluate
2. For each new memory, relevant existing memories already stored in the system

# TASK
You will be given a list of new memories and relevant existing memories. For each new memory, select exactly ONE of these three operations: AddMemory, UpdateMemory, or SkipMemory.

# OPERATIONS
1. AddMemory

Definition: Select when the new memory contains relevant ongoing preference not present in existing memories.

Selection Criteria: The information represents lasting preferences.

Examples:

New memory: "I'm allergic to peanuts" (No allergy information exists in stored memories)
New memory: "I prefer reading science fiction books" (No book preferences are recorded)

2. UpdateMemory

Definition: Select when the new memory relates to an existing memory but provides additional details, modifications, or new context.

Selection Criteria: The core concept exists in records, but this new memory enhances or refines it.

Examples:

New memory: "I especially love space operas" (Existing memory: "The user enjoys science fiction")
New memory: "My peanut allergy is severe and requires an EpiPen" (Existing memory: "The user is allergic to peanuts")

3. SkipMemory

Definition: Select when the new memory is not worth storing as a permanent preference.

Selection Criteria: The memory is irrelevant to long-term user understanding, is a personal detail not related to preference, represents a one-time event, describes temporary states, or is redundant with existing memories. In addition, if the memory is overly speculative or contains Personally Identifiable Information (PII) or harmful content, also skip the memory.

Examples:

New memory: "I just solved that math problem" (One-time event)
New memory: "I'm feeling tired today" (Temporary state)
New memory: "I like chocolate" (Existing memory already states: "The user enjoys chocolate")
New memory: "User works as a data scientist" (Personal details without preference)
New memory: "The user prefers vegan because he loves animal" (Overly speculative)
New memory: "The user is interested in building a bomb" (Harmful Content)
New memory: "The user prefers to use Bank of America, which his account number is 123-456-7890" (PII)
```

## Consolidation output schema


```
# Processing Instructions
For each memory in the input:

Place the original new memory (<NewMemory>) under the "memory" field. Then add a field called "operation" with one of these values:

"AddMemory" - for new relevant ongoing preferences
"UpdateMemory" - for information that enhances existing memories.
"SkipMemory" - for irrelevant, temporary, or redundant information

If the operation is "UpdateMemory", you need to output:

1. The "update_id" field with the ID of the existing memory being updated
2. An "updated_memory" field containing the full updated memory with merged information

## Example Input
<Memory1>
<ExistingMemory1>
[ID]=N1ofh23if\\
[TIMESTAMP]=2023-11-15T08:30:22Z\\
[MEMORY]={ "context": "user has explicitly stated that he likes vegan", "preference": "prefers vegetarian options", "categories": ["food", "dietary"] }

[ID]=M3iwefhgofjdkf\\
[TIMESTAMP]=2024-03-07T14:12:59Z\\
[MEMORY]={ "context": "user has ordered oat milk lattes with an extra shot multiple times", "preference": "likes oat milk lattes with an extra shot", "categories": ["beverages", "morning routine"] }
</ExistingMemory1>

<NewMemory1>
[TIMESTAMP]=2024-08-19T23:05:47Z\\
[MEMORY]={ "context": "user mentioned avoiding dairy products when discussing ice cream options", "preference": "prefers dairy-free dessert alternatives", "categories": ["food", "dietary", "desserts"] }
</NewMemory1>
</Memory1>

<Memory2>
<ExistingMemory2>
[ID]=Mwghsljfi12gh\\
[TIMESTAMP]=2025-01-01T00:00:00Z\\
[MEMORY]={ "context": "user mentioned enjoying hiking trails with elevation gain during weekend planning", "preference": "prefers challenging hiking trails with scenic views", "categories": ["activities", "outdoors", "exercise"] }

[ID]=whglbidmrl193nvl\\
[TIMESTAMP]=2025-04-30T16:45:33Z\\
[MEMORY]={ "context": "user discussed favorite shows and expressed interest in documentaries about sustainability", "preference": "enjoys environmental and sustainability documentaries", "categories": ["entertainment", "education", "media"] }
</ExistingMemory2>

<NewMemory2>
[TIMESTAMP]=2025-09-12T03:27:18Z\\
[MEMORY]={ "context": "user researched trips to coastal destinations with public transportation options", "preference": "prefers car-free travel to seaside locations", "categories": ["travel", "transportation", "vacation"] }
</NewMemory2>
</Memory2>

<Memory3>
<ExistingMemory3>
[ID]=P4df67gh\\
[TIMESTAMP]=2026-02-28T11:11:11Z\\
[MEMORY]={ "context": "user has mentioned enjoying coffee with breakfast multiple times", "preference": "prefers starting the day with coffee", "categories": ["beverages", "morning routine"] }

[ID]=Q8jk12lm\\
[TIMESTAMP]=2026-07-04T19:45:01Z\\
[MEMORY]={ "context": "user has stated they typically wake up around 6:30am on weekdays", "preference": "has an early morning schedule on workdays", "categories": ["schedule", "habits"] }
</ExistingMemory3>

<NewMemory3>
[TIMESTAMP]=2026-12-25T22:30:59Z\\
[MEMORY]={ "context": "user mentioned they didn't sleep well last night and felt tired today", "preference": "feeling tired and groggy", "categories": ["sleep", "wellness"] }
</NewMemory3>
</Memory3>

## Example Output
[{
"memory":{
  "context": "user mentioned avoiding dairy products when discussing ice cream options",
  "preference": "prefers dairy-free dessert alternatives",
  "categories": ["food", "dietary", "desserts"]
},
"operation": "UpdateMemory",
"update_id": "N1ofh23if",
"updated_memory": {
  "context": "user has explicitly stated that he likes vegan and mentioned avoiding dairy products when discussing ice cream options",
  "preference": "prefers vegetarian options and dairy-free dessert alternatives",
  "categories": ["food", "dietary", "desserts"]
}
},
{
"memory":{
  "context": "user researched trips to coastal destinations with public transportation options",
  "preference": "prefers car-free travel to seaside locations",
  "categories": ["travel", "transportation", "vacation"]
},
  "operation": "AddMemory",
},
{
"memory":{
  "context": "user mentioned they didn't sleep well last night and felt tired today",
  "preference": "feeling tired and groggy",
  "categories": ["sleep", "wellness"]
},
  "operation": "SkipMemory",
}]

Like the example, return only the list of JSON with corresponding operation. Do NOT add any explanation.
```
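If you log or replay consolidation inputs, the `[ID]`/`[TIMESTAMP]`/`[MEMORY]` record format shown in the example input can be parsed with a few lines of Python. This is an illustrative sketch, assuming one field per line:

```python
import json
import re

# One existing-memory record, in the format shown in the example input above.
sample = """[ID]=N1ofh23if
[TIMESTAMP]=2023-11-15T08:30:22Z
[MEMORY]={ "context": "user has explicitly stated that he likes vegan", "preference": "prefers vegetarian options", "categories": ["food", "dietary"] }"""

def parse_existing_memories(text: str) -> list[dict]:
    """Parse [ID]/[TIMESTAMP]/[MEMORY] lines into dicts; the [MEMORY] value
    is itself JSON and closes out each record."""
    records, current = [], {}
    for line in text.splitlines():
        m = re.match(r"\[(ID|TIMESTAMP|MEMORY)\]=(.*)", line.strip())
        if not m:
            continue
        key, value = m.group(1), m.group(2).rstrip("\\")
        current[key.lower()] = json.loads(value) if key == "MEMORY" else value
        if key == "MEMORY":
            records.append(current)
            current = {}
    return records

print(parse_existing_memories(sample))
```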

# Summary strategy

The `SummaryStrategy` is responsible for generating condensed, real-time summaries of conversations within a single session. It captures key topics, main tasks, and decisions, providing a high-level overview of the dialogue.

 **Steps in the strategy** 

The summary strategy includes the following steps:
+  **Consolidation** – Determines whether to write useful information to a new record or an existing record.

 **Strategy output** 

The summary strategy returns XML-formatted output, where each `<topic>` tag represents a distinct area of the user’s memory. XML lets multiple topics be captured and organized in a single summary while preserving clarity.

A single session can have multiple summary chunks, each representing a portion of the conversation. Together, these chunks form the complete summary for the entire session.

These summary chunks can be retrieved using the [ListMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_ListMemoryRecords.html) operation with a namespace filter. You can also perform semantic search over the summary chunks using the [RetrieveMemoryRecords](https://docs.aws.amazon.com/bedrock-agentcore/latest/APIReference/API_RetrieveMemoryRecords.html) operation to retrieve only the chunks relevant to your query.

 **Examples of insights captured by this strategy include:** 
+ A summary of a support interaction, such as "The user reported an issue with order $1XYZ-123, and the agent initiated a replacement."
+ The outcome of a planning session, like "The team agreed to move the project deadline to Friday."

By referencing this summary, an agent can quickly recall the context of a long or complex conversation without needing to re-process the entire history. This is essential for maintaining conversational flow and for efficiently managing the context window of the foundation model.

 **Default namespace** 

 `/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/` 

**Note**  
 `sessionId` is a required parameter in the summary namespace because summaries are generated and maintained at the session level.

**Topics**
+ [System prompt for summary strategy](memory-summary-prompt.md)

# System prompt for summary strategy

The summary strategy includes instructions and an output schema in the default system prompt for its single consolidation step.

## Consolidation instructions


There are no consolidation instructions for the built-in summary strategy.

## Consolidation output schema


```
You are a summary generator. You will be given a text block, a concise global summary, and a detailed summary you previous generated.
<task>
- Given the contexts(e.g. global summary, detailed previous summary), your goal is to generate
(1) a concise global summary keeping in main target of the conversation, such as the task and the requirements.
(2) a detailed delta summary of the given text block, without repeating the historical detailed summary.
- The previous summary is a context for you to understand the main topics.
- You should only output the delta summary, not the whole summary.
- The generated delta summary should be as concise as possible.
</task>
<extra_task_requirements>
- Summarize with the same language as the given text block.
    - If the messages are in a specific language, summarize with the same language.
</extra_task_requirements>

When you generate global summary you ALWAYS follow the below guidelines:
<guidelines_for_global_summary>
- The global summary should be concise and to the point, only keep the most important information such as the task and the requirements.
- If there is no new high-level information, do not change the global summary. If there is new tasks or requirements, update the global summary.
- The global summary will be pure text wrapped by <global_summary></global_summary> tag.
- The global summary should be no exceed specified word count limit.
- Tracking the size of the global summary by calculating the number of words. If the word count reaches the limit, try to compress the global summary.
</guidelines_for_global_summary>

When you generate detailed delta summaries you ALWAYS follow the below guidelines:
<guidelines_for_delta_summary>
- Each summary MUST be formatted in XML format.
- You should cover all important topics.
- The summary of the topic should be place between <topic name="$TOPIC_NAME"></topic>.
- Only include information that are explicitly stated or can be logically inferred from the conversation.
- Consider the timestamps when you synthesize the summary.
- NEVER start with phrases like 'Here's the summary...', provide directly the summary in the format described below.
</guidelines_for_delta_summary>

The XML format of each summary is as it follows:

<existing_global_summary_word_count>
    $Word Count
</existing_global_summary_word_count>

<global_summary_condense_decision>
    The total word count of the existing global summary is $Total Word Count.
    The word count limit for global summary is $Word Count Limit.
    Since we exceed/do not exceed the word count limit, I need to condense the existing global summary/I don't need to condense the existing global summary.
</global_summary_condense_decision>

<global_summary>
    ...
</global_summary>

<delta_detailed_summary>
    <topic name="$TOPIC_NAME">
        ...
    </topic>
    ...
</delta_detailed_summary>
```
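Because the delta summary is a fragment rather than a complete, single-rooted XML document, `<topic>` blocks can be pulled out with a regular expression instead of an XML parser. A sketch with hypothetical topic names and contents:

```python
import re

# Hypothetical delta summary fragment, for illustration only.
delta = """<delta_detailed_summary>
    <topic name="order issue">
        The user reported an issue with an order and the agent initiated a replacement.
    </topic>
    <topic name="shipping preference">
        The user asked for expedited shipping on the replacement.
    </topic>
</delta_detailed_summary>"""

# Map each topic name to its trimmed body text.
topics = {
    name: body.strip()
    for name, body in re.findall(r'<topic name="([^"]+)">(.*?)</topic>', delta, re.DOTALL)
}
print(sorted(topics))
```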

**Note**  
Built-in strategies may use [cross-region inference](https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference.html) for optimal performance and availability. Bedrock automatically selects the optimal Region within your geography to process your inference request, maximizing available compute resources and model availability, and providing the best customer experience. There is no additional cost for using cross-region inference.

# Episodic memory strategy

 **Episodic memory** captures meaningful slices of user and system interactions so applications can recall context in a way that feels focused and relevant. Instead of storing every raw event, it identifies important moments, summarizes them into compact records, and organizes them so the system can retrieve what matters without noise. This creates a more adaptive and intelligent experience by allowing models to understand how context has evolved over time.

Its strength comes from having structured context that spans many interactions, while remaining efficient to store, search, and update. Developers get a balance of freshness, accuracy, and long term continuity without needing to engineer their own summarization pipelines.

 **Reflections** build on episodic records by analyzing past episodes to surface insights, patterns, and higher level conclusions. Instead of simply retrieving what happened, reflections help the system understand why certain events matter and how they should influence future behavior. They turn raw experience into guidance the application can use immediately, giving models a way to learn from history.

Their value comes from lifting information above individual moments, or episodes, and creating durable knowledge that improves decision making, personalization, and consistency. This helps applications avoid repeating mistakes, adapt more quickly to user preferences, and behave in a way that feels coherent over long periods.

Customers should use episodic memory in any scenario where understanding a sequence of past interactions improves quality, as well as scenarios where long term improvement matters. Ideal use cases include customer support conversations, agent driven workflows, code assistants that rely on session history, personal productivity tools, troubleshooting or diagnostic flows, and applications that need context grounded in real prior events rather than static profiles.

When you invoke the episodic strategy, AgentCore automatically detects episode completion within conversations and processes events into structured episode records.

 **Steps in the strategy** 

The episodic memory strategy includes the following steps:
+  **Extraction** – Analyzes the in-progress episode and determines whether the episode is complete.
+  **Consolidation** – When an episode is complete, combines the extractions into a single episode record.
+  **Reflection** – Generates insights across episodes.

 **Strategy output** 

The episodic memory strategy returns XML-formatted output for both episodes and reflections. Each episode is broken down into a situation, intent, assessment, justification, and episode-level reflection. As the interaction proceeds, the episode is analyzed turn-by-turn. You can use this information to better understand the order of operations and tool use.

 **Examples of episodes captured by this strategy** 
+ A code deployment interaction where the agent selected specific tools, encountered an error, and successfully resolved it using an alternative approach.
+ An appointment rescheduling task that captured the user’s intent, the agent’s decision to use a particular tool, and the successful outcome.
+ A data processing workflow that documented which parameters led to optimal performance for a specific data type.

The episodic strategy includes memory extraction and consolidation steps (shared with other strategies). In addition, the episodic strategy also generates reflections, which analyze episodes in the background as interactions take place. Reflections consolidate across multiple episodes to extract broader insights that identify successful strategies and patterns, potential improvements, common failure modes, and lessons learned that span multiple interactions.

 **Examples of reflections include** 
+ Identifying which tool combinations consistently lead to successful outcomes for specific task types.
+ Recognizing patterns in failed attempts and the approaches that resolved them.
+ Extracting best practices from multiple successful episodes with similar scenarios.

The following image schematizes the episodic memory strategy:

![Schema of episodic memory strategy.](http://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/images/memory/episodic-memory-strategy.png)


By referencing stored episodes, your agent can retrieve relevant past experiences through semantic search and review reflections to avoid repeating failed approaches and to adapt successful strategies to new contexts. This strategy is useful for agents that benefit from identifying patterns, need to continually update information, maintain consistency across interactions, and require context and reasoning rather than static knowledge to make decisions.

**Topics**
+ [Namespaces](#episodic-memory-strategy-namespaces)
+ [How to best retrieve episodes to improve agentic performance](#memory-episodic-retrieve-episodes)
+ [System prompts for episodic memory strategy](memory-episodic-prompt.md)

## Namespaces


When you create a memory with the episodic strategy, you define namespaces under which to store episodes and reflections.

**Note**  
Regardless of the namespace you choose to store episodes in, episodes are always created from a single session.

Episodes are commonly stored in one of the following namespaces:
+  `/strategy/{memoryStrategyId}/` – Store episodes at the strategy level. Episodes that have different actors or that come from different sessions, but that belong to the same strategy, are stored in the same namespace.
+  `/strategy/{memoryStrategyId}/actor/{actorId}/` – Store all episodes at the actor level. Episodes that come from different sessions, but that belong to the same actor, are stored in the same namespace.
+  `/strategy/{memoryStrategyId}/actor/{actorId}/session/{sessionId}/` – Store all episodes at the session level. Episodes that belong to the same session are stored in the same namespace.

Reflections must match the same namespace pattern as episodes, but reflections can be less nested. For example, if your episodic namespace is `/strategy/{memoryStrategyId}/actor/{actorId}/` , you can use the following namespaces for reflections:
+  `/strategy/{memoryStrategyId}/actor/{actorId}/` – Insights will be extracted across all episodes for an actor.
+  `/strategy/{memoryStrategyId}/` – Insights will be extracted across all episodes and across all actors for the strategy.

**Important**  
Because reflections can span multiple actors within the same memory resource, consider the privacy implications of cross-actor analysis when retrieving reflections. Consider using [guardrails in conjunction with memory](https://github.com/awslabs/amazon-bedrock-agentcore-samples/tree/main/01-tutorials/04-AgentCore-memory/03-advanced-patterns/01-guardrails-integration) or reflecting at the actor level if this is a concern.

## How to best retrieve episodes to improve agentic performance


There are multiple ways to utilize episodic memory:
+ Within your agent code
  + When starting a new task, configure your agent to query for the most similar episodes and reflections. You can also query for relevant episodes and reflections later in the task, based on your own logic.
  + When creating short-term memories with [CreateEvent](https://docs.aws.amazon.com/bedrock-agentcore-control/latest/APIReference/API_CreateEvent.html), include `TOOL` results for the best extraction quality.
  + For similar successful episodes, linearize the turns within the episode and feed only that sequence to the agent so it focuses on the main steps.
+ Manually
  + Review your reflections or unsuccessful episodes, and consider whether some issues could be solved with updates to your agent code.
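As a sketch of the first bullet, the snippet below assembles the parameters for a semantic search over stored episodes. The memory ID, namespace, and query are placeholders, and the exact request shape should be checked against the RetrieveMemoryRecords API reference:

```python
# Hypothetical sketch: build a retrieval request for similar past episodes.
# The memory ID, namespace, and query are placeholders; verify the exact
# parameter shape against the RetrieveMemoryRecords API reference.

def build_episode_query(memory_id: str, namespace: str, task: str,
                        top_k: int = 5) -> dict:
    """Assemble parameters for a semantic search over stored episodes."""
    return {
        "memoryId": memory_id,
        "namespace": namespace,
        "searchCriteria": {"searchQuery": task, "topK": top_k},
    }

params = build_episode_query(
    memory_id="mem-placeholder",
    namespace="/strategy/strat-123/actor/user-42/",
    task="Generate a weekly sales report from the orders database",
)

# With an AgentCore Memory data-plane client, the call might look like:
# client = boto3.client("bedrock-agentcore")
# records = client.retrieve_memory_records(**params)
```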

When performing retrievals, note that memory records are indexed based on "intent" for episodes and "use case" for reflections.

For other memory strategies, memory records are generated on a regular basis throughout an interaction. Episodic memory records, by contrast, are generated only after AgentCore Memory detects a completed episode. If an episode is not clearly complete, record generation takes longer because the system waits to see whether the conversation continues.

# System prompts for episodic memory strategy
System prompts for episodic memory strategy

The episodic memory strategy includes instructions and output schemas in the default prompts for episode extraction, episode consolidation, and reflection generation.

**Topics**
+ [

## Episode extraction instructions
](#episode-extraction-instructions)
+ [

## Episode extraction output schema
](#episode-extraction-output-schema)
+ [

## Episode consolidation instructions
](#episode-generation-instructions)
+ [

## Episode consolidation output schema
](#episode-generation-output-schema)
+ [

## Reflection generation instructions
](#reflection-generation-instructions)
+ [

## Reflection generation output schema
](#reflection-generation-output-schema)

## Episode extraction instructions


```
You are an expert conversation analyst. Your task is to analyze multiple turns of conversation between a user and an AI assistant, focusing on tool usage, input arguments, and reasoning processes.

# Analysis Framework:

## 1. Context Analysis
- Examine all conversation turns provided within <conversation></conversation> tags
- Each turn will be marked with <turn_[id]></turn_[id]> tags
- Identify the circumstances and context that the assistant is responding to in each interaction
- Try to identify or recover the user's overall objective for the entire conversation, which may go beyond the given conversation turns
- When available, incorporate context from <previous_[k]_turns></previous_[k]_turns> tags to understand the user's broader objectives from provided conversation history

## 2. Assistant Analysis (Per Turn)
For EACH conversation turn, analyze the assistant's approach by identifying:
- **Context**: The circumstances and situation the assistant is responding to, and how the assistant's goal connects to the user's overall objective (considering previous interactions when available)
- **Intent**: The assistant's primary goal for this specific conversation turn
- **Action**: Which specific tools were used with what input arguments and sequence of execution. If no tools were used, describe the concrete action/response the assistant took.
- **Reasoning**: Why these tools were chosen, how arguments were determined, and what guided the decision-making process. If no tools were used, explain the reasoning behind the assistant's action/response.

## 3. Outcome Assessment (Per Turn)
For EACH turn, using the next turn's user message:
- Determine whether the assistant successfully achieved its stated goal
- Evaluate the effectiveness of the action taken - what worked well and what didn't
- Assess whether the user's overall objective has been satisfied, remains in progress, or is evolving

**Do not include any PII (personally identifiable information) or user-specific data in your output.**
```

## Episode extraction output schema


```
You MUST provide a separate <summary> block for EACH conversation turn. Number them sequentially:

<summary>
<summary_turn>
<turn_id>
The id of the turn that matches the input, e.g. 0, 1, 2, etc.
</turn_id>
<situation>
A brief description of the circumstances and context that the assistant is responding to in this turn, including the user's overall objective (which may go beyond this specific turn) and any relevant history from previous interactions
</situation>
<intent>
The assistant's primary goal for this specific interaction—what the assistant aimed to accomplish in this turn
</intent>
<action>
Briefly describe which actions were taken or specific tools were used, what input arguments or parameters were provided to each tool.
</action>
<thought>
Briefly explain why these specific tools or actions were chosen for this task, how the input arguments were determined (whether from the user's explicit request or inferred from context), what constraints or requirements influenced the approach, and what information guided the decision-making process
</thought>
<assessment_assistant>
Start with Yes or No — Whether the assistant successfully achieved its stated goal for this turn
Then add a brief justification based on the relevant context
</assessment_assistant>
<assessment_user>
Yes or No - Whether this turn represents the END OF THE CONVERSATION EPISODE (the user's current inquiry has concluded). Then add a brief explanation by considering messages in the next turns:
1. If this turn represents the END OF THE CONVERSATION EPISODE (the user's current inquiry has concluded), then Yes (it is a clear signal that the user's inquiry has concluded).
2. If the user is continuing with new questions or shifting to another task, then Yes (it is a clear signal that the user is finished with the current task and is ready to move on to the next task).
3. If the user is asking for clarification or more information for the current task, indicating that the user's inquiry is in progress, then No (it is a clear signal that the user's inquiry is not yet concluded).
4. If there is no next turn and there is no clear signal showing that the user's inquiry has concluded, then No.
</assessment_user>
</summary_turn>

<summary_turn>
<turn_id>...</turn_id>
<situation>...</situation>
<intent>...</intent>
<action>...</action>
<thought>...</thought>
<assessment_assistant>...</assessment_assistant>
<assessment_user>...</assessment_user>
</summary_turn>

... continue for all turns ...

<summary_turn>
<turn_id>...</turn_id>
<situation>...</situation>
<intent>...</intent>
<action>...</action>
<thought>...</thought>
<assessment_assistant>...</assessment_assistant>
<assessment_user>...</assessment_user>
</summary_turn>
</summary>

Attention: Only output 1-2 sentences for each field. Be concise and avoid lengthy explanations.
Make sure the number of <summary_turn> is the same as the number of turns in the conversation.
```
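Because the extraction output is well-formed XML, it can be parsed with standard tooling. The snippet below is a minimal sketch using a fabricated sample (it is illustrative, not actual model output):

```python
# Minimal sketch: parse the per-turn <summary> output into Python dicts.
# The sample XML below is fabricated for illustration.
import xml.etree.ElementTree as ET

sample = """
<summary>
<summary_turn>
<turn_id>0</turn_id>
<situation>User asks to list open support tickets.</situation>
<intent>Retrieve the user's open tickets.</intent>
<action>Called list_tickets with status="open".</action>
<thought>The status filter follows directly from the request.</thought>
<assessment_assistant>Yes - the tickets were returned.</assessment_assistant>
<assessment_user>Yes - the user moved on to a new task.</assessment_user>
</summary_turn>
</summary>
"""

root = ET.fromstring(sample)
turns = [
    {child.tag: (child.text or "").strip() for child in turn}
    for turn in root.findall("summary_turn")
]

print(turns[0]["turn_id"])          # 0
print(turns[0]["assessment_user"])  # Yes - the user moved on to a new task.
```

The `assessment_user` field is what signals episode completion, so checking whether it starts with "Yes" is one simple way to detect the end of an episode.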

## Episode consolidation instructions


```
You are an expert conversation analyst. Your task is to analyze and summarize conversations between a user and an AI assistant provided within <conversation_turns></conversation_turns> tags.

# Analysis Objectives:
- Provide a comprehensive summary covering all key aspects of the interaction
- Understand the user's underlying needs and motivations
- Evaluate the effectiveness of the conversation in meeting those needs

# Analysis Components:
Examine the conversation through the following dimensions:
**Situation**: The context and circumstances that prompted the user to initiate this conversation—what was happening that led them to seek assistance?
**Intent**: The user's primary goal, the problem they wanted to solve, or the outcome they sought to achieve through this interaction.
**Assessment**: A definitive evaluation of whether the user's goal was successfully achieved.
**Justification**: Clear reasoning supported by specific evidence from the conversation that explains your assessment.
**Reflection**: Key insights from the sequence of turns, focusing on patterns in tool usage, reasoning processes, and decision-making. Identify effective tool selection and argument patterns, reasoning or tool choices to avoid, and actionable recommendations for similar situations.
```

## Episode consolidation output schema


```
# Output Format:

Provide your analysis using the following structured XML format:
<summary>
<situation>
Brief description of the context and circumstances that prompted this conversation—what led the user to seek assistance at this moment
</situation>
<intent>
The user's primary goal, the specific problem they wanted to solve, or the concrete outcome they sought to achieve
</intent>
<assessment>
[Yes/No] — Whether the user's goal was successfully achieved
</assessment>
<justification>
Brief justification for your assessment based on key moments from the conversation
</justification>
<reflection>
Synthesize key insights from the sequence of turns, focusing on patterns in tool usage, reasoning processes, and decision-making that led to success or failure. Identify effective tool selection and argument patterns that worked well, and reasoning or tool choices that should be avoided.
</reflection>
</summary>
```

## Reflection generation instructions


```
You are an expert at extracting actionable insights from agent task execution trajectories to build reusable knowledge for future tasks.

# Task:
Analyze the provided episodes and their reflection knowledge, and synthesize new reflection knowledge that can guide future scenarios.

# Input:
- **Main Episode**: The primary trajectory to reflect upon (context, goal, and execution steps)
- **Relevant Episodes**: Relevant trajectories that provide additional context and learning opportunities
- **Existing Reflection Knowledge**: Previously generated reflection insights from related episodes (each with an ID) that can be synthesized or expanded upon

# Reflection Process:

## 1. Pattern Identification
- First, review the main episode's user_intent (goal), description (context), turns (actions and thoughts), and reflection/finding (lessons learned)
- Then, review the relevant episodes and identify NEW patterns across episodes
- Review existing reflection knowledge to understand what's already been learned
- When agent system prompt is available, use it to understand the agent's instructions, capabilities, constraints, and requirements
- Finally, determine if patterns update existing knowledge or represent entirely new insights

## 2. Knowledge Synthesis
For each identified pattern, create a reflection entry with:

### Operator
Specify one of the following operations:
- **add**: This is a completely new reflection that addresses patterns not covered by existing reflection knowledge. Do NOT include an <id> field.
- **update**: This reflection is an updated/improved version of an existing reflection from the input. ONLY use "update" when the new pattern shares the SAME core concept or title as an existing reflection. Include the existing reflection's ID in the <id> field.
    - Length constraint: If updating would make the combined use_cases + hints exceed 300 words, create a NEW reflection with "add" instead. Split the pattern into a more specific, focused insight rather than growing the existing one indefinitely.

### ID (only for "update" operator)
If operator is "update", specify the ID of the existing reflection that this new reflection expands upon. This ID comes from the existing reflection knowledge provided in the input.

### Title
Concise, descriptive name for the insight (e.g., "Error Recovery in API Calls", "Efficient File Search Strategies").
    - When updating, keep the same title or a very similar variant to indicate it's the same conceptual pattern.
    - When adding due to length constraint: Use a more specific variant of the title that narrows the scope (e.g., "Error Recovery in API Calls" → "Error Recovery in API Rate Limiting Scenarios")

### Applied Use Cases
Briefly describe when this applies, including:
- The types of goals (based on episode user_intents) where this insight helps
- The problems or challenges this reflection addresses
- Trigger conditions that signal when to use this knowledge

**When updating an existing reflection (within length limit):** Summarize both the original use cases and the new ones to create a comprehensive view.

### Concrete Hints
Briefly describe actionable guidance based on the identified patterns. Examples to include:
- Tool selection and usage patterns from successful episodes
- What worked well and what to avoid (from failures)
- Decision criteria for applying these patterns
- Specific reasoning details and context that explain WHY these patterns work

**When updating an existing reflection (within length limit):** If the new episodes reveal NEW hints, strategies, or patterns not in the existing reflection, ADD them to this section. Summarize both the original hints and the new ones to create a comprehensive view.

### Confidence Score
Score from 0.1 to 1.0 (0.1 increments) indicating how useful this will be for future agents:
- Higher (0.8-1.0): Clear actionable patterns that consistently led to success/failure
- Medium (0.4-0.7): Useful insights but context-dependent or limited evidence
- Lower (0.1-0.3): Tentative patterns that may not generalize well

When updating, adjust the confidence score based on the additional evidence from new episodes.

## 3. Synthesis Guidelines
- **When updating (within length limits)**:
  - Keep the update concise - integrate new insights efficiently without verbose repetition
  - DO NOT lose valuable information from the original reflection
- **When a reflection becomes too long**: Split it into more specific, focused reflections
    - Each new reflection should be self-contained and focused on a specific sub-pattern
- Focus on **transferable** knowledge, not task-specific details
- Emphasize **why** certain approaches work, not just what was done
- Include both positive patterns (what to do) and negative patterns (what to avoid)
- If the existing reflection knowledge already covers the patterns well and no new insights emerge, generate fewer or no new reflections
```

## Reflection generation output schema


```
<attention>
Aim for high-quality reflection entries that either add new learnings or update existing reflection knowledge.
    - Keep reflections focused and split them into more specific patterns when they grow too long.
    - Keep the use_cases and hints focused: Aim for 100-200 words.
    - If it's growing beyond this, consider if you should create a new, more specific reflection instead.
</attention>

# Output Format:

<reflections>
<reflection>
<operator>[add or update]</operator>
<id>[ID of existing reflection being expanded - only include this field if operator is "update"]</id>
<title>[Clear, descriptive title - keep same/similar to original when updating]</title>
<use_cases>
[Briefly describe the types of goals (from episode user_intents), problems addressed, trigger conditions. When updating: combine original use cases and new ones from recent episodes]
</use_cases>
<hints>
[Briefly describe tool usage patterns, what works, what to avoid, decision criteria, reasoning details. When updating: combine original hints and new insights from recent episodes]
</hints>
<confidence>[0.1 to 1.0]</confidence>
</reflection>
</reflections>
```
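The reflection output is likewise parseable XML. The sketch below (using a fabricated sample, not actual model output) shows how a consumer might separate "add" from "update" operations and read the confidence score:

```python
# Minimal sketch: parse a <reflections> block and inspect each entry.
# The sample XML below is fabricated for illustration.
import xml.etree.ElementTree as ET

sample = """
<reflections>
<reflection>
<operator>add</operator>
<title>Error Recovery in API Calls</title>
<use_cases>Tasks that call flaky external APIs and need retries.</use_cases>
<hints>Retry with backoff before surfacing an error to the user.</hints>
<confidence>0.8</confidence>
</reflection>
</reflections>
"""

root = ET.fromstring(sample)
for ref in root.findall("reflection"):
    op = ref.findtext("operator").strip()
    # Per the schema, <id> is present only when the operator is "update".
    ref_id = ref.findtext("id")
    confidence = float(ref.findtext("confidence"))
    assert (ref_id is not None) == (op == "update")
    print(op, confidence)  # add 0.8
```

Filtering on the confidence score (for example, keeping only entries at 0.8 or above) is one way to prioritize which reflections to feed back into your agent.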