Interface PromptInferenceConfigurationProps

All Superinterfaces:
software.amazon.jsii.JsiiSerializable
All Known Implementing Classes:
PromptInferenceConfigurationProps.Jsii$Proxy

@Generated(value="jsii-pacmak/1.112.0 (build de1bc80)", date="2025-07-24T11:33:25.555Z") @Stability(Experimental) public interface PromptInferenceConfigurationProps extends software.amazon.jsii.JsiiSerializable
(experimental) Properties for creating a prompt inference configuration.

Example:

 // Customer-managed KMS key used to encrypt the prompt
 Key cmk = Key.Builder.create(this, "cmk").build();
 BedrockFoundationModel claudeModel = BedrockFoundationModel.ANTHROPIC_CLAUDE_SONNET_V1_0;
 // Text variant with an explicit inference configuration (temperature, topP, maxTokens)
 IPromptVariant variant1 = PromptVariant.text(TextPromptVariantProps.builder()
         .variantName("variant1")
         .model(claudeModel)
         .promptVariables(List.of("topic"))
         .promptText("This is my first text prompt. Please summarize our conversation on: {{topic}}.")
         .inferenceConfiguration(PromptInferenceConfiguration.text(PromptInferenceConfigurationProps.builder()
                 .temperature(1)
                 .topP(0.999)
                 .maxTokens(2000)
                 .build()))
         .build());
 // Prompt that uses the variant above, encrypted with the customer-managed key
 Prompt prompt1 = Prompt.Builder.create(this, "prompt1")
         .promptName("prompt1")
         .description("my first prompt")
         .defaultVariant(variant1)
         .variants(List.of(variant1))
         .kmsKey(cmk)
         .build();
 
  • Method Details

    • getMaxTokens

      @Stability(Experimental) @Nullable default Number getMaxTokens()
      (experimental) The maximum number of tokens to return in the response.

      Default: - No limit specified

    • getStopSequences

      @Stability(Experimental) @Nullable default List<String> getStopSequences()
      (experimental) A list of strings that define sequences after which the model will stop generating.

      Default: - No stop sequences
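
      The example above does not set stop sequences. As a minimal sketch (the "###" delimiter and token limit are illustrative assumptions, not values defined by this interface), a configuration that stops generation at a custom delimiter could look like:

       PromptInferenceConfigurationProps props = PromptInferenceConfigurationProps.builder()
               .stopSequences(List.of("###")) // assumed delimiter; use whatever markers your prompt format emits
               .maxTokens(500)
               .build();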

    • getTemperature

      @Stability(Experimental) @Nullable default Number getTemperature()
      (experimental) Controls the randomness of the response.

      Higher values make the output more random, while lower values make it more deterministic. The valid range is 0.0 to 1.0.

      Default: - Model default temperature

    • getTopP

      @Stability(Experimental) @Nullable default Number getTopP()
      (experimental) The percentage of most-likely candidates that the model considers for the next token.

      Valid range is 0.0 to 1.0.

      Default: - Model default topP

    • builder

      @Stability(Experimental) static PromptInferenceConfigurationProps.Builder builder()
      Returns:
      a PromptInferenceConfigurationProps.Builder of PromptInferenceConfigurationProps
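
      As a sketch that reuses only the calls shown in the example above (the specific values and the "\n\nHuman:" stop sequence are illustrative assumptions), the builder can combine all four optional properties and be wrapped for a text variant:

       PromptInferenceConfiguration inferenceConfig = PromptInferenceConfiguration.text(
               PromptInferenceConfigurationProps.builder()
                       .maxTokens(1024)
                       .stopSequences(List.of("\n\nHuman:")) // illustrative stop sequence; adjust to your prompt format
                       .temperature(0.5)
                       .topP(0.9)
                       .build());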