Start a system prompt recommendation
Start a recommendation to generate an optimized system prompt for your agent. The service analyzes agent traces, identifies failure patterns, and produces a revised system prompt that improves performance on the target evaluator.
Note
Recommendations are generated by LLMs. Review and test before applying them.
Code samples
Example
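A minimal sketch of a request body, assuming the parameter names shown in the tables below. The `type` value and the prompt-input and trace-source fields are omitted because their exact values are not shown on this page:

```json
{
  "name": "checkout-agent-prompt-v2",
  "description": "Optimize the checkout agent prompt for goal success.",
  "clientToken": "a1b2c3d4-...",
  "systemPromptRecommendationConfig": {
    "evaluationConfig": {
      "evaluators": [
        {"evaluatorArn": "arn:aws:bedrock-agentcore:::evaluator/Builtin.GoalSuccessRate"}
      ]
    }
  }
}
```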
Request parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `name` | String | Yes | A name for the recommendation. Maximum 48 characters. Pattern: |
| `type` | String | Yes | Must be |
| `systemPromptRecommendationConfig` | Object | Yes | Contains the configuration fields listed under systemPromptRecommendationConfig fields below. |
| `description` | String | No | Optional description. Maximum 4096 characters. |
| `clientToken` | String | No | Idempotency token. If you retry a request with the same client token, the service returns the existing recommendation instead of creating a new one. |
systemPromptRecommendationConfig fields
| Field | Type | Required | Description |
|---|---|---|---|
| | Union | Yes | The current system prompt to optimize. Provide either inline text or a configuration bundle (see System prompt input modes). |
| | Union | Yes | Trace source for analysis. See Trace sources for recommendations. |
| `evaluationConfig` | Object | Yes | Evaluation configuration specifying the target evaluator. Contains an `evaluators` list with exactly one evaluator reference. |
Choosing an evaluator
Select an evaluator aligned with the direction you want to improve. The evaluator determines what the recommendation optimizes toward: whatever it scores highly is what the optimizer pushes the prompt toward.
You can use a built-in evaluator or provide a custom evaluator ARN. Use the following guidelines to choose:
- If your agent has a clear task to complete (booking, retrieval, a multi-step workflow), `Builtin.GoalSuccessRate` is the right signal.
- If your agent is more open-ended and you care about the quality of the interaction itself, `Builtin.Helpfulness` is a better fit.
- If the quality you care about is domain-specific or not captured by a built-in evaluator, use a custom evaluator that measures it directly.
In the API, specify the evaluator in the `evaluationConfig.evaluators` list with exactly one evaluator reference:

```json
"evaluationConfig": {
  "evaluators": [
    {"evaluatorArn": "arn:aws:bedrock-agentcore:::evaluator/Builtin.GoalSuccessRate"}
  ]
}
```
In the CLI, use the `--evaluator` flag:

```shell
--evaluator Builtin.GoalSuccessRate
```
System prompt input modes
| Mode | CLI flags | API field |
|---|---|---|
| Inline text | | |
| Configuration bundle | | |
When using a configuration bundle, the result includes a new bundle version with the optimized system prompt applied.
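For illustration only, an inline-text input and a configuration-bundle input might look like the following inside `systemPromptRecommendationConfig`. The key names here (`systemPrompt`, `text`, `bundleReference`) are hypothetical placeholders, not the service's actual field names:

```json
{"systemPrompt": {"text": "You are a travel-booking assistant. ..."}}
```

```json
{"systemPrompt": {"bundleReference": {"bundleId": "...", "version": "3"}}}
```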
Response
| Field | Type | Description |
|---|---|---|
| | String | Unique identifier for the recommendation. |
| | String | ARN of the recommendation. |
| `name` | String | The name you specified. |
| | String | |
| `status` | String | Initial status: |
| `createdAt` | Timestamp | When the recommendation was created. |
| `updatedAt` | Timestamp | When the recommendation was last updated. |
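A sketch of the response shape. The identifier, ARN, and timestamp values are illustrative, the identifier and ARN key names are assumptions, and the initial status value is left as a placeholder:

```json
{
  "recommendationId": "...",
  "recommendationArn": "arn:aws:bedrock-agentcore:...",
  "name": "checkout-agent-prompt-v2",
  "status": "...",
  "createdAt": "2025-06-01T12:00:00Z",
  "updatedAt": "2025-06-01T12:00:00Z"
}
```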
Recommendation result
When the recommendation reaches `COMPLETED` status (retrieved via Get a recommendation), the result contains:
| Field | Type | Description |
|---|---|---|
| | String | The optimized system prompt text. |
| | Object | Present when the input was a configuration bundle. Contains the new bundle version with the optimized system prompt applied. |
| | String | Present if the recommendation failed. Error code describing the failure. |
| | String | Present if the recommendation failed. Human-readable error description. |
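Once started, a recommendation runs asynchronously, so the result is retrieved by polling until it reaches a terminal status. A minimal polling sketch, with the status-fetching call stubbed out: in practice `get_status` would wrap the Get a recommendation operation and return its status field, and `IN_PROGRESS` as the non-terminal status value is an assumption.

```python
import time

def poll_until_done(get_status, interval=5.0, timeout=600.0):
    """Call get_status() until it returns a terminal status or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("COMPLETED", "FAILED"):  # terminal states
            return status
        time.sleep(interval)  # wait before polling again
    raise TimeoutError("recommendation did not reach a terminal status in time")
```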
Errors
| Error | HTTP status | Description |
|---|---|---|
| ValidationException | 400 | Invalid request parameters. Check field constraints and required fields. |
| AccessDeniedException | 403 | Insufficient permissions. Verify IAM policies. |
| ConflictException | 409 | A recommendation with the same client token already exists with different parameters. |
| ServiceQuotaExceededException | 402 | You have exceeded the maximum number of concurrent recommendations. |
| ThrottlingException | 429 | Request rate exceeded. Retry with exponential backoff. |
| InternalServerException | 500 | Service-side error. Retry the request. |
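For the 429 case, the standard pattern is exponential backoff with jitter. A minimal sketch; `ThrottledError` is a hypothetical stand-in for however your client surfaces the throttling error:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for the service's 429 throttling error."""

def with_backoff(call, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry call() on throttling, doubling the delay cap on each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; propagate the error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # full jitter: sleep in [0, delay]
```

Capping the delay (`max_delay`) keeps long retry chains bounded, and full jitter spreads retries out so throttled clients do not all retry in lockstep.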