

# Mistral AI models
<a name="model-parameters-mistral"></a>

This section describes the request parameters and response fields for Mistral AI models. Use this information to make inference calls to Mistral AI models with the [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) and [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) (streaming) operations. This section also includes Python code examples that show how to call Mistral AI models. To use a model in an inference operation, you need the model ID for the model. To get the model ID, see [Supported foundation models in Amazon Bedrock](models-supported.md). Some models also work with the [Converse API](conversation-inference.md). To check if a specific Mistral AI model supports a feature, see [models at a glance](model-cards.md). For more code examples, see [Code examples for Amazon Bedrock using AWS SDKs](service_code_examples.md).

Foundation models in Amazon Bedrock support input and output modalities, which vary from model to model. To check the modalities, Amazon Bedrock features, and AWS Regions that Mistral AI models support, see [Supported foundation models in Amazon Bedrock](models-supported.md).

When you make inference calls with Mistral AI models, you include a prompt for the model. For general information about creating prompts for the models that Amazon Bedrock supports, see [Prompt engineering concepts](prompt-engineering-guidelines.md). For Mistral AI specific prompt information, see the [Mistral AI prompt engineering guide](https://docs.mistral.ai/guides/prompting_capabilities/).

**Topics**
+ [Mistral AI text completion](model-parameters-mistral-text-completion.md)
+ [Mistral AI chat completion](model-parameters-mistral-chat-completion.md)
+ [Mistral AI Large (24.07) parameters and inference](model-parameters-mistral-large-2407.md)
+ [Pixtral Large (25.02) parameters and inference](model-parameters-mistral-pixtral-large.md)

# Mistral AI text completion
<a name="model-parameters-mistral-text-completion"></a>

The Mistral AI text completion API lets you generate text with a Mistral AI model.

You make inference requests to Mistral AI models with [InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html) (streaming). 

Mistral AI models are available under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt). For more information about using Mistral AI models, see the [Mistral AI documentation](https://docs.mistral.ai/).

**Topics**
+ [Supported models](#mistral--text-completion-supported-models)
+ [Request and Response](#model-parameters-mistral-text-completion-request-response)
+ [Code example](#api-inference-examples-mistral-text-completion)

## Supported models
<a name="mistral--text-completion-supported-models"></a>

You can use the following Mistral AI models.
+ Mistral 7B Instruct
+ Mixtral 8X7B Instruct
+ Mistral Large
+ Mistral Small

You need the model ID for the model that you want to use. To get the model ID, see [Supported foundation models in Amazon Bedrock](models-supported.md). 

## Request and Response
<a name="model-parameters-mistral-text-completion-request-response"></a>

------
#### [ Request ]

The Mistral AI models have the following inference parameters. 

```
{
    "prompt": string,
    "max_tokens" : int,
    "stop" : [string],    
    "temperature": float,
    "top_p": float,
    "top_k": int
}
```

The following are required parameters.
+  **prompt** – (Required) The prompt that you want to pass to the model, as shown in the following example. 

  ```
  <s>[INST] What is your favourite condiment? [/INST]
  ```

  The following example shows how to format a multi-turn prompt. 

  ```
  <s>[INST] What is your favourite condiment? [/INST]
  Well, I'm quite partial to a good squeeze of fresh lemon juice. 
  It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> 
  [INST] Do you have mayonnaise recipes? [/INST]
  ```

  Text for the user role is inside the `[INST]...[/INST]` tokens; text outside is the assistant role. The beginning and end of a string are represented by the `<s>` (beginning of string) and `</s>` (end of string) tokens. For information about sending a chat prompt in the correct format, see [Chat template](https://docs.mistral.ai/models/#chat-template) in the Mistral AI documentation. 
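  As an illustration, a small helper like the following (hypothetical; not part of any SDK) can assemble a multi-turn conversation into this template:

  ```python
  def format_mistral_prompt(turns):
      """Assemble (user, assistant) turns into the Mistral instruction template.

      `turns` is a list of (user_text, assistant_text) pairs; pass None as the
      assistant text for the final turn that awaits a model reply.
      """
      parts = ["<s>"]
      for user_text, assistant_text in turns:
          parts.append(f"[INST] {user_text} [/INST]")
          if assistant_text is not None:
              # Completed assistant turns are closed with the end-of-string token.
              parts.append(f" {assistant_text}</s>")
      return "".join(parts)


  prompt = format_mistral_prompt([
      ("What is your favourite condiment?", "Fresh lemon juice."),
      ("Do you have mayonnaise recipes?", None),
  ])
  print(prompt)
  ```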

The following are optional parameters.
+ **max_tokens** – Specify the maximum number of tokens to use in the generated response. The model truncates the response once the generated text exceeds `max_tokens`.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-text-completion.html)
+ **stop** – A list of stop sequences that, if generated by the model, stop it from generating further output.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-text-completion.html)
+ **temperature** – Controls the randomness of predictions made by the model. For more information, see [Influence response generation with inference parameters](inference-parameters.md).     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-text-completion.html)
+ **top_p** – Controls the diversity of text that the model generates by setting the percentage of most-likely candidates that the model considers for the next token. For more information, see [Influence response generation with inference parameters](inference-parameters.md).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-text-completion.html)
+ **top_k** – Controls the number of most-likely candidates that the model considers for the next token. For more information, see [Influence response generation with inference parameters](inference-parameters.md).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-text-completion.html)
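
Putting these together, a complete request body might look like the following (all values shown are illustrative):

```
{
    "prompt": "<s>[INST] What is your favourite condiment? [/INST]",
    "max_tokens": 400,
    "stop": ["###"],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50
}
```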

------
#### [ Response ]

The `body` response from a call to `InvokeModel` is the following:

```
{
  "outputs": [
    {
        "text": string,
        "stop_reason": string
    }
  ]
}
```

The `body` response has the following fields:
+ **outputs** – A list of outputs from the model. Each output has the following fields.
  + **text** – The text that the model generated. 
  + **stop_reason** – The reason why the response stopped generating text. Possible values are:
    + **stop** – The model has finished generating text for the input prompt. The model stops because it has no more content to generate or if the model generates one of the stop sequences that you define in the `stop` request parameter.
    + **length** – The length of the tokens for the generated text exceeds the value of `max_tokens` in the call to `InvokeModel` (`InvokeModelWithResponseStream`, if you are streaming output). The response is truncated to `max_tokens` tokens. 
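
For example, a response body might look like the following (the generated text is illustrative):

```
{
  "outputs": [
    {
      "text": "Well, I'm quite partial to a good squeeze of fresh lemon juice.",
      "stop_reason": "stop"
    }
  ]
}
```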

------

## Code example
<a name="api-inference-examples-mistral-text-completion"></a>

This example shows how to call the Mistral 7B Instruct model.

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate text using a Mistral AI model.
"""
import json
import logging
import boto3


from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_text(model_id, body):
    """
    Generate text using a Mistral AI model.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        JSON: The response from the model.
    """

    logger.info("Generating text with Mistral AI model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    response = bedrock.invoke_model(
        body=body,
        modelId=model_id
    )

    logger.info("Successfully generated text with Mistral AI model %s", model_id)

    return response


def main():
    """
    Entrypoint for Mistral AI example.
    """

    logging.basicConfig(level=logging.INFO,
                        format="%(levelname)s: %(message)s")

    try:
        model_id = 'mistral.mistral-7b-instruct-v0:2'

        prompt = """<s>[INST] In Bash, how do I list all text files in the current directory
          (excluding subdirectories) that have been modified in the last month? [/INST]"""

        body = json.dumps({
            "prompt": prompt,
            "max_tokens": 400,
            "temperature": 0.7,
            "top_p": 0.7,
            "top_k": 50
        })

        response = generate_text(model_id=model_id,
                                 body=body)

        response_body = json.loads(response.get('body').read())

        outputs = response_body.get('outputs')

        for index, output in enumerate(outputs):

            print(f"Output {index + 1}\n----------")
            print(f"Text:\n{output['text']}\n")
            print(f"Stop reason: {output['stop_reason']}\n")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " +
              format(message))
    else:
        print(f"Finished generating text with Mistral AI model {model_id}.")


if __name__ == "__main__":
    main()
```
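
If you want tokens as they are generated, the same request body works with `InvokeModelWithResponseStream`. The following is a minimal sketch (assuming the same Mistral 7B Instruct model ID and credentials with model access) that builds the body with a small helper and prints each chunk as it arrives:

```python
import json


def build_body(prompt, max_tokens=400, temperature=0.7):
    """Serialize a Mistral AI text-completion request body."""
    return json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature
    })


def stream_text(model_id, body):
    """Yield generated text chunks from a streaming inference call."""
    import boto3  # imported here so build_body stays usable without the SDK

    bedrock = boto3.client(service_name='bedrock-runtime')
    response = bedrock.invoke_model_with_response_stream(
        modelId=model_id, body=body)
    for event in response['body']:
        chunk = event.get('chunk')
        if chunk:
            yield json.loads(chunk['bytes'].decode())['outputs'][0]['text']


body = build_body("<s>[INST] Name three French cheeses. [/INST]")
# To run against your account (requires AWS credentials and model access):
# for text in stream_text('mistral.mistral-7b-instruct-v0:2', body):
#     print(text, end="")
```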

# Mistral AI chat completion
<a name="model-parameters-mistral-chat-completion"></a>

The Mistral AI chat completion API lets you create conversational applications.

**Tip**  
You can use the Mistral AI chat completion API with the base inference operations ([InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html)). However, we recommend that you use the Converse API to implement messages in your application. The Converse API provides a unified set of parameters that work across all models that support messages. For more information, see [Carry out a conversation with the Converse API operations](conversation-inference.md).

Mistral AI models are available under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt). For more information about using Mistral AI models, see the [Mistral AI documentation](https://docs.mistral.ai/).

**Topics**
+ [Supported models](#mistral-supported-models-chat-completion)
+ [Request and Response](#model-parameters-mistral-chat-completion-request-response)

## Supported models
<a name="mistral-supported-models-chat-completion"></a>

You can use the following Mistral AI models.
+ Mistral Large

You need the model ID for the model that you want to use. To get the model ID, see [Supported foundation models in Amazon Bedrock](models-supported.md). 

## Request and Response
<a name="model-parameters-mistral-chat-completion-request-response"></a>

------
#### [ Request ]

The Mistral AI models have the following inference parameters. 

```
{
    "messages": [
        {
            "role": "system"|"user"|"assistant",
            "content": str
        },
        {
            "role": "assistant",
            "content": "",
            "tool_calls": [
                {
                    "id": str,
                    "function": {
                        "name": str,
                        "arguments": str
                    }
                }
            ]
        },
        {
            "role": "tool",
            "tool_call_id": str,
            "content": str
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": str,
                "description": str,
                "parameters": dict
            }
        }
    ],
    "tool_choice": "auto"|"any"|"none",
    "max_tokens": int,
    "top_p": float,
    "temperature": float
}
```

The following are required parameters.
+  **messages** – (Required) The messages that you want to pass to the model.
  + **role** – The role for the message. Valid values are:
    + **system** – Sets the behavior and context for the model in the conversation. 
    + **user** – The user message to send to the model.
    + **assistant** – The response from the model.
  + **content** – The content for the message.

  ```
  [
      {
          "role": "user",
          "content": "What is the most popular song on WZPZ?"
      }
  ]
  ```

  To pass a tool result, use JSON with the following fields.
  + **role** – The role for the message. The value must be `tool`. 
  + **tool_call_id** – The ID of the tool request. You get the ID from the `tool_calls` field in the response from the previous request. 
  + **content** – The result from the tool.

  The following example is the result from a tool that gets the most popular song on a radio station.

  ```
  {
      "role": "tool",
      "tool_call_id": "v6RMMiRlT7ygYkT4uULjtg",
      "content": "{\"song\": \"Elemental Hotel\", \"artist\": \"8 Storey Hike\"}"
  }
  ```

The following are optional parameters.
+  **tools** – Definitions of tools that the model may use.

  If you include `tools` in your request, the model may return a `tool_calls` field in the message that represents the model's use of those tools. You can then run those tools using the tool input generated by the model and optionally return the results to the model in a `tool` role message.

  The following example is for a tool that gets the most popular songs on a radio station.

  ```
  [
      {
          "type": "function",
          "function": {
              "name": "top_song",
              "description": "Get the most popular song played on a radio station.",
              "parameters": {
                  "type": "object",
                  "properties": {
                      "sign": {
                          "type": "string",
                          "description": "The call sign for the radio station for which you want the most popular song. Example call signs are WZPZ and WKRP."
                      }
                  },
                  "required": [
                      "sign"
                  ]
              }
          }
      }
  ]
  ```
+  **tool_choice** – Specifies how functions are called. If set to `none`, the model won't call a function and will generate a message instead. If set to `auto`, the model can choose to either generate a message or call a function. If set to `any`, the model is forced to call a function.
+ **max_tokens** – Specify the maximum number of tokens to use in the generated response. The model truncates the response once the generated text exceeds `max_tokens`.     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-chat-completion.html)
+ **temperature** – Controls the randomness of predictions made by the model. For more information, see [Influence response generation with inference parameters](inference-parameters.md).     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-chat-completion.html)
+ **top_p** – Controls the diversity of text that the model generates by setting the percentage of most-likely candidates that the model considers for the next token. For more information, see [Influence response generation with inference parameters](inference-parameters.md).    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral-chat-completion.html)
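
A complete request body combining these parameters might look like the following (all values shown are illustrative):

```
{
    "messages": [
        {
            "role": "system",
            "content": "You are a concise assistant."
        },
        {
            "role": "user",
            "content": "What is the most popular song on WZPZ?"
        }
    ],
    "max_tokens": 400,
    "temperature": 0.7,
    "top_p": 0.9
}
```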

------
#### [ Response ]

The `body` response from a call to `InvokeModel` is the following:

```
{
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": str,
                "tool_calls": [...]
            },
            "stop_reason": "stop"|"length"|"tool_calls"
        }
    ]
}
```

The `body` response has the following fields:
+ **choices** – A list of outputs from the model. Each output has the following fields.
  + **index** – The index for the message. 
  + **message** – The message from the model. 
    + **role** – The role for the message. 
    + **content** – The content for the message. 
    + **tool_calls** – If the value of `stop_reason` is `tool_calls`, this field contains a list of tool requests that the model wants you to run. 
      + **id** – The ID for the tool request. 
      + **function** – The function that the model is requesting. 
        + **name** – The name of the function. 
        + **arguments** – The arguments to pass to the tool. 

      The following is an example request for a tool that gets the top song on a radio station.

      ```
      [
          {
              "id": "v6RMMiRlT7ygYkT4uULjtg",
              "function": {
                  "name": "top_song",
                  "arguments": "{\"sign\": \"WZPZ\"}"
              }
          }
      ]
      ```
  + **stop_reason** – The reason why the response stopped generating text. Possible values are:
    + **stop** – The model has finished generating text for the input prompt. The model stops because it has no more content to generate or if the model generates one of the stop sequences that you define in the `stop` request parameter.
    + **length** – The length of the tokens for the generated text exceeds the value of `max_tokens`. The response is truncated to `max_tokens` tokens. 
    + **tool_calls** – The model is requesting that you run a tool.
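
  A minimal sketch of acting on `stop_reason` — dispatching a `tool_calls` response through a registry of local functions (the registry and function names here are hypothetical):

  ```python
  import json


  def handle_choice(choice, tool_registry):
      """Return plain text for a normal stop, or a `tool` role message
      containing the result of the requested function call."""
      if choice["stop_reason"] == "tool_calls":
          call = choice["message"]["tool_calls"][0]
          name = call["function"]["name"]
          args = json.loads(call["function"]["arguments"])
          result = tool_registry[name](**args)  # run the requested tool locally
          return {
              "role": "tool",
              "tool_call_id": call["id"],
              "content": result,
          }
      return choice["message"]["content"]
  ```

  Append the returned `tool` role message to `messages` and call `InvokeModel` again so that the model can compose its final answer from the tool result.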

------

# Mistral AI Large (24.07) parameters and inference
<a name="model-parameters-mistral-large-2407"></a>

The Mistral AI chat completion API lets you create conversational applications. You can also use the Amazon Bedrock Converse API with this model. You can use tools to make function calls.

**Tip**  
You can use the Mistral AI chat completion API with the base inference operations ([InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html)). However, we recommend that you use the Converse API to implement messages in your application. The Converse API provides a unified set of parameters that work across all models that support messages. For more information, see [Carry out a conversation with the Converse API operations](conversation-inference.md).

Mistral AI models are available under the [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt). For more information about using Mistral AI models, see the [Mistral AI documentation](https://docs.mistral.ai/).

**Topics**
+ [Supported models](#mistral-large-2407-supported-models)
+ [Request and Response Examples](#model-parameters-mistral-large-2407-request-response)

## Supported models
<a name="mistral-large-2407-supported-models"></a>

You can use the following Mistral AI models with the code examples on this page.
+ Mistral Large 2 (24.07)

You need the model ID for the model that you want to use. To get the model ID, see [Supported foundation models in Amazon Bedrock](models-supported.md). 

## Request and Response Examples
<a name="model-parameters-mistral-large-2407-request-response"></a>

------
#### [ Request ]

Mistral AI Large (24.07) invoke model example. 

```
import boto3 
import json

bedrock = boto3.client('bedrock-runtime', 'us-west-2') 
response = bedrock.invoke_model( 
        modelId='mistral.mistral-large-2407-v1:0', 
        body=json.dumps({
            'messages': [ 
                { 
                    'role': 'user', 
                    'content': 'which llm are you?' 
                } 
             ], 
         }) 
       ) 

print(json.dumps(json.loads(response['body'].read()), indent=4))
```

------
#### [ Converse ]

Mistral AI Large (24.07) converse example. 

```
import boto3 
import json

bedrock = boto3.client('bedrock-runtime', 'us-west-2')
response = bedrock.converse( 
    modelId='mistral.mistral-large-2407-v1:0', 
    messages=[ 
        { 
            'role': 'user', 
            'content': [ 
                { 
                    'text': 'which llm are you?' 
                } 
             ] 
          } 
     ] 
  ) 

print(json.dumps(response['output'], indent=4))
```

------
#### [ invoke_model_with_response_stream ]

Mistral AI Large (24.07) invoke_model_with_response_stream example. 

```
import boto3 
import json

bedrock = boto3.client('bedrock-runtime', 'us-west-2')
response = bedrock.invoke_model_with_response_stream(
    modelId='mistral.mistral-large-2407-v1:0',
    body=json.dumps({
        "messages": [{"role": "user", "content": "What is the best French cheese?"}],
    })
)

stream = response.get('body')
if stream:
    for event in stream:
        chunk = event.get('chunk')
        if chunk:
            chunk_obj = json.loads(chunk.get('bytes').decode())
            print(chunk_obj)
```

------
#### [ converse_stream ]

Mistral AI Large (24.07) converse_stream example. 

```
import boto3 
import json

bedrock = boto3.client('bedrock-runtime', 'us-west-2')
mistral_params = {
    "messages": [{
        "role": "user", "content": [{"text": "What is the best French cheese? "}]
    }],
    "modelId": "mistral.mistral-large-2407-v1:0",
}
response = bedrock.converse_stream(**mistral_params)
stream = response.get('stream')
if stream:
    for event in stream:

        if 'messageStart' in event:
            print(f"\nRole: {event['messageStart']['role']}")

        if 'contentBlockDelta' in event:
            print(event['contentBlockDelta']['delta']['text'], end="")

        if 'messageStop' in event:
            print(f"\nStop reason: {event['messageStop']['stopReason']}")

        if 'metadata' in event:
            metadata = event['metadata']
            if 'usage' in metadata:
                print("\nToken usage ... ")
                print(f"Input tokens: {metadata['usage']['inputTokens']}")
                print(f"Output tokens: {metadata['usage']['outputTokens']}")
                print(f"Total tokens: {metadata['usage']['totalTokens']}")
            if 'metrics' in metadata:
                print(f"Latency: {metadata['metrics']['latencyMs']} milliseconds")
```

------
#### [ JSON Output ]

Mistral AI Large (24.07) JSON output example. 

```
import boto3 
import json

bedrock = boto3.client('bedrock-runtime', 'us-west-2')
mistral_params = {
        "body": json.dumps({
            "messages": [{"role": "user", "content": "What is the best French meal? Return the name and the ingredients in short JSON object."}]
        }),
        "modelId":"mistral.mistral-large-2407-v1:0",
    }
response = bedrock.invoke_model(**mistral_params)

body = response.get('body').read().decode('utf-8')
print(json.loads(body))
```

------
#### [ Tooling ]

Mistral AI Large (24.07) tools example. 

```
import functools
import json

import boto3
import pandas as pd

bedrock = boto3.client('bedrock-runtime', 'us-west-2')

data = {
    'transaction_id': ['T1001', 'T1002', 'T1003', 'T1004', 'T1005'],
    'customer_id': ['C001', 'C002', 'C003', 'C002', 'C001'],
    'payment_amount': [125.50, 89.99, 120.00, 54.30, 210.20],
    'payment_date': ['2021-10-05', '2021-10-06', '2021-10-07', '2021-10-05', '2021-10-08'],
    'payment_status': ['Paid', 'Unpaid', 'Paid', 'Paid', 'Pending']
}

# Create DataFrame
df = pd.DataFrame(data)


def retrieve_payment_status(df: pd.DataFrame, transaction_id: str) -> str:
    if transaction_id in df.transaction_id.values: 
        return json.dumps({'status': df[df.transaction_id == transaction_id].payment_status.item()})
    return json.dumps({'error': 'transaction id not found.'})

def retrieve_payment_date(df: pd.DataFrame, transaction_id: str) -> str:
    if transaction_id in df.transaction_id.values: 
        return json.dumps({'date': df[df.transaction_id == transaction_id].payment_date.item()})
    return json.dumps({'error': 'transaction id not found.'})

tools = [
    {
        "type": "function",
        "function": {
            "name": "retrieve_payment_status",
            "description": "Get payment status of a transaction",
            "parameters": {
                "type": "object",
                "properties": {
                    "transaction_id": {
                        "type": "string",
                        "description": "The transaction id.",
                    }
                },
                "required": ["transaction_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "retrieve_payment_date",
            "description": "Get payment date of a transaction",
            "parameters": {
                "type": "object",
                "properties": {
                    "transaction_id": {
                        "type": "string",
                        "description": "The transaction id.",
                    }
                },
                "required": ["transaction_id"],
            },
        },
    }
]

names_to_functions = {
    'retrieve_payment_status': functools.partial(retrieve_payment_status, df=df),
    'retrieve_payment_date': functools.partial(retrieve_payment_date, df=df)
}



test_tool_input = "What's the status of my transaction T1001?"
message = [{"role": "user", "content": test_tool_input}]


def invoke_bedrock_mistral_tool():
   
    mistral_params = {
        "body": json.dumps({
            "messages": message,
            "tools": tools           
        }),
        "modelId":"mistral.mistral-large-2407-v1:0",
    }
    response = bedrock.invoke_model(**mistral_params)
    body = response.get('body').read().decode('utf-8')
    body = json.loads(body)
    choices = body.get("choices")
    message.append(choices[0].get("message"))

    tool_call = choices[0].get("message").get("tool_calls")[0]
    function_name = tool_call.get("function").get("name")
    function_params = json.loads(tool_call.get("function").get("arguments"))
    print("\nfunction_name: ", function_name, "\nfunction_params: ", function_params)
    function_result = names_to_functions[function_name](**function_params)

    message.append({"role": "tool", "content": function_result, "tool_call_id":tool_call.get("id")})
   
    new_mistral_params = {
        "body": json.dumps({
                "messages": message,
                "tools": tools           
        }),
        "modelId":"mistral.mistral-large-2407-v1:0",
    }
    response = bedrock.invoke_model(**new_mistral_params)
    body = response.get('body').read().decode('utf-8')
    body = json.loads(body)
    print(body)
invoke_bedrock_mistral_tool()
```

------

# Pixtral Large (25.02) parameters and inference
<a name="model-parameters-mistral-pixtral-large"></a>

Pixtral Large (25.02) is a 124B parameter multimodal model that combines state-of-the-art image understanding with powerful text processing capabilities. AWS is the first cloud provider to deliver Pixtral Large (25.02) as a fully managed, serverless model. This model delivers frontier-class performance on document analysis, chart interpretation, and natural image understanding tasks, while maintaining the advanced text capabilities of Mistral Large 2.

With a 128K context window, Pixtral Large (25.02) achieves best-in-class performance on key benchmarks including MathVista, DocVQA, and VQAv2. The model features comprehensive multilingual support across many languages and is trained on over 80 programming languages. Key capabilities include advanced mathematical reasoning, native function calling, JSON output, and robust context adherence for RAG applications.

The Mistral AI chat completion API lets you create conversational applications. You can also use the Amazon Bedrock Converse API with this model. You can use tools to make function calls.

**Tip**  
You can use the Mistral AI chat completion API with the base inference operations ([InvokeModel](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) or [InvokeModelWithResponseStream](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html)). However, we recommend that you use the Converse API to implement messages in your application. The Converse API provides a unified set of parameters that work across all models that support messages. For more information, see [Carry out a conversation with the Converse API operations](conversation-inference.md).

The Mistral AI Pixtral Large model is available under the [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md). For more information about using Mistral AI models, see the [Mistral AI documentation](https://docs.mistral.ai/).

**Topics**
+ [Supported models](#pixtral-large-2502-supported-models)
+ [Request and Response Examples](#model-parameters-pixtral-large-2502-request-response)

## Supported models
<a name="pixtral-large-2502-supported-models"></a>

You can use the following Mistral AI models with the code examples on this page.
+ Pixtral Large (25.02)

You need the model ID for the model that you want to use. To get the model ID, see [Supported foundation models in Amazon Bedrock](models-supported.md). 

## Request and Response Examples
<a name="model-parameters-pixtral-large-2502-request-response"></a>

------
#### [ Request ]

Pixtral Large (25.02) invoke model example.

```
import boto3
import json
import base64


input_image = "image.png"
with open(input_image, "rb") as f:
    image = f.read()

image_bytes = base64.b64encode(image).decode("utf-8")

bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name="us-east-1")


request_body = {
    "messages" : [
        {
          "role" : "user",
          "content" : [
            {
              "text": "Describe this picture:",
              "type": "text"
            },
            {
              "type" : "image_url",
              "image_url" : {
                "url" : f"data:image/png;base64,{image_bytes}"
              }
            }
          ]
        }
      ],
      "max_tokens" : 10
    }

response = bedrock.invoke_model(
        modelId='us.mistral.pixtral-large-2502-v1:0',
        body=json.dumps(request_body)
       )


print(json.dumps(json.loads(response.get('body').read()), indent=4))
```

------
#### [ Converse ]

Pixtral Large (25.02) Converse example.

```
import boto3
import json
import base64

input_image = "image.png"
with open(input_image, "rb") as f:
    image_bytes = f.read()


bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name="us-east-1")

messages =[
    {
        "role" : "user",
        "content" : [
            {
              "text": "Describe this picture:"
            },
            {
                "image": {
                    "format": "png",
                    "source": {
                        "bytes": image_bytes
                    }
                }
            }
        ]
    }
]

response = bedrock.converse(
        modelId='mistral.pixtral-large-2502-v1:0',
        messages=messages
       )

print(json.dumps(response.get('output'), indent=4))
```

------
#### [ invoke_model_with_response_stream ]

Pixtral Large (25.02) invoke_model_with_response_stream example. 

```
import boto3
import json
import base64


input_image = "image.png"
with open(input_image, "rb") as f:
    image = f.read()

image_bytes = base64.b64encode(image).decode("utf-8")

bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name="us-east-1")


request_body = {
    "messages" : [
        {
          "role" : "user",
          "content" : [
            {
              "text": "Describe this picture:",
              "type": "text"
            },
            {
              "type" : "image_url",
              "image_url" : {
                "url" : f"data:image/png;base64,{image_bytes}"
              }
            }
          ]
        }
      ],
      "max_tokens" : 10
    }

response = bedrock.invoke_model_with_response_stream(
        modelId='us.mistral.pixtral-large-2502-v1:0',
        body=json.dumps(request_body)
       )

stream = response.get('body')
if stream:
    for event in stream:
        chunk=event.get('chunk')
        if chunk:
            chunk_obj=json.loads(chunk.get('bytes').decode())
            print(chunk_obj)
```

------
#### [ converse_stream ]

Pixtral Large (25.02) converse_stream example. 

```
import boto3
import json
import base64

input_image = "image.png"
with open(input_image, "rb") as f:
    image_bytes = f.read()


bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name="us-east-1")

messages =[
    {
        "role" : "user",
        "content" : [
            {
              "text": "Describe this picture:"
            },
            {
                "image": {
                    "format": "png",
                    "source": {
                        "bytes": image_bytes
                    }
                }
            }
        ]
    }
]

response = bedrock.converse_stream(
        modelId='mistral.pixtral-large-2502-v1:0',
        messages=messages
       )

stream = response.get('stream')
if stream:
    for event in stream:
        if 'messageStart' in event:
            print(f"\nRole: {event['messageStart']['role']}")

        if 'contentBlockDelta' in event:
            print(event['contentBlockDelta']['delta']['text'], end="")

        if 'messageStop' in event:
            print(f"\nStop reason: {event['messageStop']['stopReason']}")

        if 'metadata' in event:
            metadata = event['metadata']
            if 'usage' in metadata:
                print("\nToken usage:")
                print(f"Input tokens: {metadata['usage']['inputTokens']}")
                print(f"Output tokens: {metadata['usage']['outputTokens']}")
                print(f"Total tokens: {metadata['usage']['totalTokens']}")
            if 'metrics' in event['metadata']:
                print(
                    f"Latency: {metadata['metrics']['latencyMs']} milliseconds")
```
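
The `contentBlockDelta` handling above can be reduced to a small accumulator that joins the streamed text deltas into the full reply. The event list below is an illustrative stand-in for the `stream` iterable, modeling only the keys the loop reads:

```python
# Stand-in for the events yielded by converse_stream's 'stream' iterable;
# only the keys used below are modeled, and the text deltas are made up.
events = [
    {"messageStart": {"role": "assistant"}},
    {"contentBlockDelta": {"delta": {"text": "Hello, "}}},
    {"contentBlockDelta": {"delta": {"text": "world."}}},
    {"messageStop": {"stopReason": "end_turn"}},
]

parts = []
for event in events:
    if "contentBlockDelta" in event:
        parts.append(event["contentBlockDelta"]["delta"]["text"])

full_text = "".join(parts)  # "Hello, world."
```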

------
#### [ JSON Output ]

Pixtral Large (25.02) JSON output example. 

```
import boto3 
import json

bedrock = boto3.client('bedrock-runtime', 'us-west-2')
mistral_params = {
        "body": json.dumps({
            "messages": [{"role": "user", "content": "What is the best French meal? Return the name and the ingredients in a short JSON object."}]
        }),
        "modelId":"us.mistral.pixtral-large-2502-v1:0",
    }
response = bedrock.invoke_model(**mistral_params)

body = response.get('body').read().decode('utf-8')
print(json.loads(body))
```
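
The body printed above follows a chat-completion shape, with the generated text at `choices[0].message.content` (the same structure the tool-calling example below reads). A minimal sketch of extracting just the text, using an illustrative sample body rather than a live response:

```python
import json

# Illustrative response body in the chat-completion shape that Mistral
# models on Amazon Bedrock return from InvokeModel; the content string
# here is made up.
sample_body = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "Coq au vin"}}
    ]
})

def extract_text(body: str) -> str:
    # The generated text lives at choices[0].message.content.
    return json.loads(body)["choices"][0]["message"]["content"]

text = extract_text(sample_body)
```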

------
#### [ Tooling ]

Pixtral Large (25.02) tools example. 

```
import functools
import json

import boto3
import pandas as pd

bedrock = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1')

data = {
    'transaction_id': ['T1001', 'T1002', 'T1003', 'T1004', 'T1005'],
    'customer_id': ['C001', 'C002', 'C003', 'C002', 'C001'],
    'payment_amount': [125.50, 89.99, 120.00, 54.30, 210.20],
    'payment_date': ['2021-10-05', '2021-10-06', '2021-10-07', '2021-10-05', '2021-10-08'],
    'payment_status': ['Paid', 'Unpaid', 'Paid', 'Paid', 'Pending']
}

# Create DataFrame
df = pd.DataFrame(data)


def retrieve_payment_status(df: pd.DataFrame, transaction_id: str) -> str:
    if transaction_id in df.transaction_id.values: 
        return json.dumps({'status': df[df.transaction_id == transaction_id].payment_status.item()})
    return json.dumps({'error': 'transaction id not found.'})

def retrieve_payment_date(df: pd.DataFrame, transaction_id: str) -> str:
    if transaction_id in df.transaction_id.values: 
        return json.dumps({'date': df[df.transaction_id == transaction_id].payment_date.item()})
    return json.dumps({'error': 'transaction id not found.'})

tools = [
    {
        "type": "function",
        "function": {
            "name": "retrieve_payment_status",
            "description": "Get payment status of a transaction",
            "parameters": {
                "type": "object",
                "properties": {
                    "transaction_id": {
                        "type": "string",
                        "description": "The transaction id.",
                    }
                },
                "required": ["transaction_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "retrieve_payment_date",
            "description": "Get payment date of a transaction",
            "parameters": {
                "type": "object",
                "properties": {
                    "transaction_id": {
                        "type": "string",
                        "description": "The transaction id.",
                    }
                },
                "required": ["transaction_id"],
            },
        },
    }
]

names_to_functions = {
    'retrieve_payment_status': functools.partial(retrieve_payment_status, df=df),
    'retrieve_payment_date': functools.partial(retrieve_payment_date, df=df)
}



test_tool_input = "What's the status of my transaction T1001?"
message = [{"role": "user", "content": test_tool_input}]


def invoke_bedrock_mistral_tool():
   
    mistral_params = {
        "body": json.dumps({
            "messages": message,
            "tools": tools           
        }),
        "modelId":"us.mistral.pixtral-large-2502-v1:0",
    }
    response = bedrock.invoke_model(**mistral_params)
    body = response.get('body').read().decode('utf-8')
    body = json.loads(body)
    choices = body.get("choices")
    message.append(choices[0].get("message"))

    tool_call = choices[0].get("message").get("tool_calls")[0]
    function_name = tool_call.get("function").get("name")
    function_params = json.loads(tool_call.get("function").get("arguments"))
    print("\nfunction_name: ", function_name, "\nfunction_params: ", function_params)
    function_result = names_to_functions[function_name](**function_params)

    message.append({"role": "tool", "content": function_result, "tool_call_id":tool_call.get("id")})
   
    new_mistral_params = {
        "body": json.dumps({
                "messages": message,
                "tools": tools           
        }),
        "modelId":"us.mistral.pixtral-large-2502-v1:0",
    }
    response = bedrock.invoke_model(**new_mistral_params)
    body = response.get('body').read().decode('utf-8')
    body = json.loads(body)
    print(body)
invoke_bedrock_mistral_tool()
```

------