Interface PromptInferenceConfigurationProps
- All Superinterfaces:
software.amazon.jsii.JsiiSerializable
- All Known Implementing Classes:
PromptInferenceConfigurationProps.Jsii$Proxy
@Generated(value="jsii-pacmak/1.119.0 (build 1634eac)",
date="2025-11-17T14:41:04.518Z")
@Stability(Experimental)
public interface PromptInferenceConfigurationProps
extends software.amazon.jsii.JsiiSerializable
(experimental) Properties for creating a prompt inference configuration.
Example:
Key cmk = Key.Builder.create(this, "cmk").build();
BedrockFoundationModel claudeModel = BedrockFoundationModel.ANTHROPIC_CLAUDE_SONNET_V1_0;
IPromptVariant variant1 = PromptVariant.text(TextPromptVariantProps.builder()
        .variantName("variant1")
        .model(claudeModel)
        .promptVariables(List.of("topic"))
        .promptText("This is my first text prompt. Please summarize our conversation on: {{topic}}.")
        .inferenceConfiguration(PromptInferenceConfiguration.text(PromptInferenceConfigurationProps.builder()
                .temperature(1)
                .topP(0.999)
                .maxTokens(2000)
                .build()))
        .build());
Prompt prompt1 = Prompt.Builder.create(this, "prompt1")
        .promptName("prompt1")
        .description("my first prompt")
        .defaultVariant(variant1)
        .variants(List.of(variant1))
        .kmsKey(cmk)
        .build();
-
Nested Class Summary
Modifier and Type     Interface                                       Description
static final class    PromptInferenceConfigurationProps.Builder       A builder for PromptInferenceConfigurationProps
static final class    PromptInferenceConfigurationProps.Jsii$Proxy    An implementation for PromptInferenceConfigurationProps
-
Method Summary
Modifier and Type                                   Method              Description
static PromptInferenceConfigurationProps.Builder    builder()
default Number                                      getMaxTokens()      (experimental) The maximum number of tokens to return in the response.
default List<String>                                getStopSequences()  (experimental) A list of strings that define sequences after which the model will stop generating.
default Number                                      getTemperature()    (experimental) Controls the randomness of the response.
default Number                                      getTopP()           (experimental) The percentage of most-likely candidates that the model considers for the next token.
Methods inherited from interface software.amazon.jsii.JsiiSerializable:
$jsii$toJson
-
Method Details
-
getMaxTokens
default Number getMaxTokens()
(experimental) The maximum number of tokens to return in the response.
Default: - No limit specified
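For illustration, a minimal sketch that caps responses at 512 tokens; the setter name follows the example at the top of this page, and 512 is an arbitrary illustrative value:

PromptInferenceConfigurationProps capped = PromptInferenceConfigurationProps.builder()
        .maxTokens(512) // responses are truncated after 512 tokens; omit to leave no limit specified
        .build();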
-
getStopSequences
default List<String> getStopSequences()
(experimental) A list of strings that define sequences after which the model will stop generating.
Default: - No stop sequences
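A short sketch, assuming the generated builder exposes a stopSequences(List<String>) setter in line with the other property setters shown above; the sequence strings are illustrative:

PromptInferenceConfigurationProps withStops = PromptInferenceConfigurationProps.builder()
        .stopSequences(List.of("Human:", "###")) // generation halts as soon as either sequence is produced
        .build();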
-
getTemperature
default Number getTemperature()
(experimental) Controls the randomness of the response. Higher values make the output more random; lower values make it more deterministic. Valid range: 0.0 to 1.0.
Default: - Model default temperature
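To make the range concrete, a sketch contrasting the two ends of the scale; the specific values are illustrative:

// Low temperature: near-deterministic, repeatable output
PromptInferenceConfigurationProps precise = PromptInferenceConfigurationProps.builder()
        .temperature(0.1)
        .build();

// High temperature: more varied, creative output
PromptInferenceConfigurationProps creative = PromptInferenceConfigurationProps.builder()
        .temperature(1)
        .build();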
-
getTopP
default Number getTopP()
(experimental) The percentage of most-likely candidates that the model considers for the next token. Valid range: 0.0 to 1.0.
Default: - Model default topP
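A brief sketch; 0.9 is an illustrative value that restricts sampling to the smallest set of candidate tokens whose cumulative probability reaches 90%:

PromptInferenceConfigurationProps nucleus = PromptInferenceConfigurationProps.builder()
        .topP(0.9) // only the top 90% of the probability mass is considered
        .build();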
-
builder
static PromptInferenceConfigurationProps.Builder builder()
Returns: a PromptInferenceConfigurationProps.Builder of PromptInferenceConfigurationProps
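Putting it together, a sketch of the typical flow, mirroring the example at the top of this page; the stopSequences setter and the PromptInferenceConfiguration result type are assumed from the patterns shown above:

PromptInferenceConfigurationProps props = PromptInferenceConfigurationProps.builder()
        .temperature(0.7)
        .topP(0.999)
        .maxTokens(2000)
        .stopSequences(List.of("###"))
        .build();
// Wrap the props in a text inference configuration, ready to pass to
// TextPromptVariantProps.builder().inferenceConfiguration(...)
PromptInferenceConfiguration inferenceConfig = PromptInferenceConfiguration.text(props);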