Supervised Fine-Tuning (Full FT, PEFT) - Amazon SageMaker AI

Fine-tuning is the process of adapting a pre-trained language model to specific tasks or domains by training it on targeted datasets. Unlike pre-training, which builds general language understanding, fine-tuning optimizes the model for particular applications.

Here's an overview of key fine-tuning techniques:

Supervised Fine-Tuning (SFT)

SFT adapts a pre-trained model using labeled examples of desired inputs and outputs. The model learns to generate responses that match the provided examples, effectively teaching it to follow specific instructions or produce outputs in a particular style. SFT typically involves updating all model parameters based on task-specific data.
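To make the mechanics concrete, here is a minimal sketch of the SFT idea using a toy linear classifier in NumPy. It is not how SageMaker implements fine-tuning; it only illustrates the core loop: start from "pre-trained" weights, compute a loss against labeled input/output pairs, and update every parameter by gradient descent (full fine-tuning).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pre-trained" model: one linear layer with a softmax output.
d_in, n_classes = 8, 3
W = rng.normal(scale=0.1, size=(d_in, n_classes))  # pre-trained weights
b = np.zeros(n_classes)

# Labeled fine-tuning data: inputs paired with desired outputs.
X = rng.normal(size=(32, d_in))
y = rng.integers(0, n_classes, size=32)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(W, b):
    # Cross-entropy of the correct labels under the model.
    p = softmax(X @ W + b)
    return -np.log(p[np.arange(len(y)), y]).mean()

lr = 0.5
initial_loss = loss(W, b)
for _ in range(100):
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0   # gradient of loss w.r.t. logits
    g = p / len(y)
    W -= lr * (X.T @ g)              # full FT: all weights are updated
    b -= lr * g.sum(axis=0)          # ... including the biases

final_loss = loss(W, b)
```

After training, `final_loss` is lower than `initial_loss`: the model has been pulled toward the labeled examples, which is the essence of supervised fine-tuning.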

For detailed instructions on using SFT with Amazon Nova model customization, see the Supervised fine-tuning (SFT) section of the Amazon Nova User Guide.

Parameter-Efficient Fine-Tuning (PEFT)

PEFT techniques like Low-Rank Adaptation (LoRA) modify only a small subset of model parameters during fine-tuning, significantly reducing computational and memory requirements. LoRA works by adding small trainable "adapter" matrices to existing model weights, allowing effective adaptation while keeping most of the original model frozen. This approach enables fine-tuning of large models on limited hardware.
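The LoRA idea can be sketched in a few lines of NumPy. This is an illustrative toy, not the SageMaker or Nova implementation: a frozen weight matrix `W` is augmented with two small trainable matrices `A` and `B` whose product `B @ A` is a low-rank update, so only a small fraction of the parameters are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 512   # hidden size of the frozen pre-trained weight matrix
r = 8     # LoRA rank, chosen so that r << d

W = rng.normal(size=(d, d))               # frozen pre-trained weights
A = rng.normal(scale=0.01, size=(r, d))   # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-init

def forward(x):
    # Adapted layer: frozen path plus the low-rank update B @ A.
    return x @ W.T + x @ (B @ A).T

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapter starts as a no-op,
# so fine-tuning begins from the pre-trained model's behavior:
assert np.allclose(forward(x), x @ W.T)

full_params = W.size          # what full fine-tuning would update
lora_params = A.size + B.size # what LoRA actually trains
print(f"trainable: {lora_params} of {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Here only `A` and `B` receive gradient updates (2 x r x d parameters versus d x d for the full matrix), which is why LoRA fits large models on limited hardware.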

For detailed instructions on using PEFT with Amazon Nova model customization, see the Parameter-efficient fine-tuning (PEFT) section of the Amazon Nova User Guide.