
Class CfnFlowVersion.PromptModelInferenceConfigurationProperty

Contains inference configurations related to model inference for a prompt.

Inheritance
object
CfnFlowVersion.PromptModelInferenceConfigurationProperty
Implements
CfnFlowVersion.IPromptModelInferenceConfigurationProperty
Inherited Members
object.GetType()
object.MemberwiseClone()
object.ToString()
object.Equals(object)
object.Equals(object, object)
object.ReferenceEquals(object, object)
object.GetHashCode()
Namespace: Amazon.CDK.AWS.Bedrock
Assembly: Amazon.CDK.Lib.dll
Syntax (csharp)
public class CfnFlowVersion.PromptModelInferenceConfigurationProperty : CfnFlowVersion.IPromptModelInferenceConfigurationProperty
Syntax (vb)
Public Class CfnFlowVersion.PromptModelInferenceConfigurationProperty Implements CfnFlowVersion.IPromptModelInferenceConfigurationProperty
Remarks

For more information, see Inference parameters.

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-bedrock-flowversion-promptmodelinferenceconfiguration.html

ExampleMetadata: fixture=_generated

Examples
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
using Amazon.CDK.AWS.Bedrock;

var promptModelInferenceConfigurationProperty = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
    MaxTokens = 123,
    StopSequences = new [] { "stopSequences" },
    Temperature = 123,
    TopP = 123
};
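
Beyond the generated placeholder, a configuration with realistic values might look like the sketch below. Note the wrapper: the CfnFlowVersion.PromptInferenceConfigurationProperty type with a Text property is an assumption based on the related CfnFlowVersion types, so verify it against your CDK version.

using Amazon.CDK.AWS.Bedrock;

// A minimal sketch with plausible values; tune them for your model.
// The PromptInferenceConfigurationProperty wrapper is an assumption
// about the related CfnFlowVersion types - verify before relying on it.
var inferenceConfiguration = new CfnFlowVersion.PromptInferenceConfigurationProperty {
    Text = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
        MaxTokens = 512,                          // cap the response length
        StopSequences = new [] { "\n\nHuman:" },  // stop before a new turn
        Temperature = 0.3,                        // favor predictable output
        TopP = 0.9                                // sample from top 90% of mass
    }
};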

Synopsis

Constructors

PromptModelInferenceConfigurationProperty()

Contains inference configurations related to model inference for a prompt.

Properties

MaxTokens

The maximum number of tokens to return in the response.

StopSequences

A list of strings that define sequences after which the model will stop generating.

Temperature

Controls the randomness of the response.

TopP

The percentage of most-likely candidates that the model considers for the next token.

Constructors

PromptModelInferenceConfigurationProperty()

Contains inference configurations related to model inference for a prompt.

public PromptModelInferenceConfigurationProperty()
Remarks

For more information, see Inference parameters.

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-bedrock-flowversion-promptmodelinferenceconfiguration.html

ExampleMetadata: fixture=_generated

Examples
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
using Amazon.CDK.AWS.Bedrock;

var promptModelInferenceConfigurationProperty = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
    MaxTokens = 123,
    StopSequences = new [] { "stopSequences" },
    Temperature = 123,
    TopP = 123
};

Properties

MaxTokens

The maximum number of tokens to return in the response.

public double? MaxTokens { get; set; }
Property Value

double?

Remarks

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-bedrock-flowversion-promptmodelinferenceconfiguration.html#cfn-bedrock-flowversion-promptmodelinferenceconfiguration-maxtokens
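
Examples

As an illustration only, the sketch below caps the response length. The value 512 is an arbitrary placeholder; the valid upper bound depends on the target model, so check that model's inference parameters.

// Sketch: cap the response at 512 tokens. The exact limit is
// model-dependent, so treat this value as a placeholder.
var config = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
    MaxTokens = 512
};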

StopSequences

A list of strings that define sequences after which the model will stop generating.

public string[]? StopSequences { get; set; }
Property Value

string[]

Remarks

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-bedrock-flowversion-promptmodelinferenceconfiguration.html#cfn-bedrock-flowversion-promptmodelinferenceconfiguration-stopsequences
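
Examples

For illustration, the sketch below stops generation when the model begins a new conversational turn or emits a divider. The specific sequences are assumptions that depend on how your prompt template is structured.

// Sketch: stop once the model starts a new turn or emits a divider.
// These sequences are illustrative - match them to your template.
var config = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
    StopSequences = new [] { "\n\nHuman:", "###" }
};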

Temperature

Controls the randomness of the response.

public double? Temperature { get; set; }
Property Value

double?

Remarks

Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-bedrock-flowversion-promptmodelinferenceconfiguration.html#cfn-bedrock-flowversion-promptmodelinferenceconfiguration-temperature
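
Examples

To make the remark concrete, the sketch below contrasts a low and a high setting. The values are illustrative; the accepted range is model-dependent (commonly 0 to 1).

// Sketch: two contrasting settings; the values are illustrative.
var deterministic = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
    Temperature = 0.1  // more predictable, repeatable output
};
var creative = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
    Temperature = 0.9  // more varied, surprising output
};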

TopP

The percentage of most-likely candidates that the model considers for the next token.

public double? TopP { get; set; }
Property Value

double?

Remarks

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-bedrock-flowversion-promptmodelinferenceconfiguration.html#cfn-bedrock-flowversion-promptmodelinferenceconfiguration-topp
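
Examples

As a hedged illustration, TopP = 0.9 restricts sampling to the smallest set of candidate tokens whose cumulative probability reaches 90%. Whether to tune TopP together with Temperature is model-dependent; many model providers suggest adjusting one or the other, not both.

// Sketch: nucleus sampling over the top 90% of probability mass.
var config = new CfnFlowVersion.PromptModelInferenceConfigurationProperty {
    TopP = 0.9
};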

Implements

CfnFlowVersion.IPromptModelInferenceConfigurationProperty