Customizing Amazon Nova 2.0 models
You can customize Amazon Nova 2.0 models with Amazon Bedrock or SageMaker AI, depending on the requirements of your use case, to improve model performance and create a better customer experience.
Customization for Amazon Nova 2.0 models is provided with responsible AI considerations. The following table summarizes the availability of customization and distillation for Amazon Nova 2.0 models.
| Model Name | Model ID | Amazon Bedrock Fine-tuning | Amazon Bedrock Distillation | SageMaker AI Training Job | SageMaker AI HyperPod |
|---|---|---|---|---|---|
| Nova 2 Lite 2.0 | amazon.nova-lite-v2:0 | Yes | Student | Yes | Yes |
| Nova 2 Pro 2.0 Preview | amazon.nova-pro-v2:0 | No | Teacher | No | No |
| Amazon Nova Sonic | amazon.nova-sonic-v1:0 | No | No | No | No |
| Amazon Nova Omni | amazon.nova-omni-v1:0 | No | No | No | No |
The following table summarizes the available training recipes, the services that support each recipe, and the inference options (on-demand or Provisioned Throughput) available for the resulting model.
| Training recipe | Amazon Bedrock | SageMaker AI Training Jobs | SageMaker AI HyperPod | On demand | Provisioned Throughput |
|---|---|---|---|---|---|
| Parameter-efficient supervised fine-tuning | Yes | Yes | Yes | Yes | Yes |
| Full-rank supervised fine-tuning | No | Yes | Yes | No | Yes |
| Parameter-efficient Direct Preference Optimization | No | Yes | Yes | Yes | Yes |
| Full-rank Direct Preference Optimization | No | Yes | Yes | No | Yes |
| Proximal policy optimization reinforcement learning | No | No | Yes | No | Yes |
| Distillation (2.0 as teacher) | Yes | No | Yes | Yes | Yes |
| Continued pre-training | No | No | Yes | No | Yes |
Topics
- Customization overview
- Customization on Amazon Bedrock
- Customization on SageMaker AI
- Choosing the right customization approach
Customization overview
Model customization allows you to specialize Amazon Nova models for your domain, use cases and quality requirements. You can choose from several customization techniques and platforms based on your technical requirements, data availability and desired outcomes.
Customization techniques:
- Continued Pre-Training (CPT) - Teach models domain-specific knowledge using raw text data
- Supervised Fine-Tuning (SFT) - Customize through input-output examples
- Reinforcement Fine-Tuning (RFT) - Optimize using reward signals and human feedback
- Distillation - Transfer knowledge from larger to smaller models
Customization on Amazon Bedrock
Amazon Bedrock provides a fully managed fine-tuning experience for Amazon Nova models, making it easy to customize models without managing infrastructure.
Supported methods:
- Supervised Fine-Tuning (SFT) - Teach models through input-output examples to customize response style, format and task-specific behavior. An example training record appears after the note below.
- Reinforcement Fine-Tuning (RFT) - Maximize accuracy and align the model with real-world feedback and simulations using reward signals.
- Model Distillation - Transfers knowledge from larger "teacher" models to smaller "student" models. This process creates efficient models that maintain a significant portion of the teacher model's performance. The teacher model generates responses to diverse prompts, and these outputs train the student model to produce similar results. This approach is more effective than standard fine-tuning when you lack sufficient high-quality labeled data.
Note
For implementation details on distillation, see Model distillation.
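To make the SFT data requirement concrete, the sketch below writes a single training record in the conversational JSONL format used for Nova fine-tuning on Amazon Bedrock. Treat it as a minimal illustration: the schemaVersion string, field layout, and file name are assumptions to verify against the current Bedrock fine-tuning data-format documentation.

```python
import json

# One supervised fine-tuning record in a conversational schema.
# The schemaVersion value and field names are assumptions; confirm them
# against the Amazon Bedrock fine-tuning data-format documentation.
record = {
    "schemaVersion": "bedrock-conversation-2024",
    "system": [{"text": "You are a concise support assistant."}],
    "messages": [
        {"role": "user",
         "content": [{"text": "How do I reset my password?"}]},
        {"role": "assistant",
         "content": [{"text": "Open Settings > Security, then choose Reset password."}]},
    ],
}

# Bedrock expects one JSON object per line (JSONL) in a training file on S3.
with open("train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```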
Key features:
- Fully managed infrastructure with no cluster setup required
- Simple API-based training job submission, as sketched below
- Direct deployment to Amazon Bedrock inference endpoints
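As an illustration of API-based job submission, here is a minimal sketch that starts a fine-tuning job with the boto3 Bedrock client. The job name, model names, role ARN, S3 URIs, and hyperparameter values are placeholders, and the supported hyperparameter keys vary by model, so check the Bedrock custom models documentation before running it.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# All names, the role ARN, and the S3 URIs below are placeholders.
response = bedrock.create_model_customization_job(
    jobName="nova-lite-sft-demo",
    customModelName="nova-lite-support-assistant",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.nova-lite-v2:0",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/output/"},
    # Hyperparameter keys and valid ranges are model-specific; these two
    # are common examples, not a verified set for Nova 2.0 models.
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)

print(response["jobArn"])  # track progress with get_model_customization_job
```

When the job completes, the resulting custom model can be deployed for inference in Amazon Bedrock, for example behind Provisioned Throughput.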
When to use Amazon Bedrock fine-tuning:
- You need quick customization with minimal setup
- Your use case fits standard fine-tuning patterns
- You prefer flexible customization that scales from simple to more complex training
- You want seamless integration with Amazon Bedrock inference
For detailed instructions, see the Amazon Bedrock documentation.
Customization on SageMaker AI
SageMaker AI provides advanced training capabilities when you need full control over the customization process, access to multiple training methods, and the ability to build training pipelines that range from simple to complex.
Available training methods:
- Continued Pre-Training (CPT) - Teaches models domain-specific knowledge at scale using raw text data. Ideal for specialized technical fields, legal documents, medical literature, or any domain with unique terminology and concepts. Requires large volumes of unlabeled text (billions of tokens recommended).
- Supervised Fine-Tuning (SFT) - Customizes models through direct input-output examples. Best for teaching specific response styles, formats and task behaviors. Supports text, image and video inputs. Requires 100+ examples (2,000-10,000 recommended for optimal results).
- Reinforcement Fine-Tuning (RFT) - Optimizes models using reward signals for complex problem-solving tasks like mathematical reasoning, code generation and scientific analysis. Supports both single-turn (Lambda-based) and multi-turn (custom infrastructure) scenarios. Best used after SFT establishes baseline capabilities.
- Model Distillation - Transfers knowledge from larger "teacher" models to smaller "student" models. This process creates efficient models that maintain a significant portion of the teacher model's performance. The teacher model generates responses to diverse prompts, and these outputs train the student model to produce similar results. This approach is more effective than standard fine-tuning when you lack sufficient high-quality labeled data.
Note
For implementation details on distillation, see Model distillation.
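For a sense of what a recipe-driven training job looks like, the following sketch launches a SageMaker AI training job through the SageMaker Python SDK's PyTorch estimator with a training recipe. The recipe path, override keys, role ARN, and instance type are hypothetical placeholders, and estimator arguments can differ between SDK versions, so look up the actual Nova recipe identifiers in the SageMaker AI documentation before use.

```python
from sagemaker.pytorch import PyTorch

# The recipe path, override keys, role ARN, and instance type below are
# illustrative placeholders, not verified values for Nova 2.0 recipes.
estimator = PyTorch(
    base_job_name="nova-lite-sft",
    role="arn:aws:iam::123456789012:role/SageMakerTrainingRole",
    instance_count=1,
    instance_type="ml.p5.48xlarge",
    training_recipe="fine-tuning/nova/nova_lite_sft",          # hypothetical recipe
    recipe_overrides={"training_config": {"max_epochs": 2}},   # hypothetical keys
)

# Point the recipe at your training data in S3 and start the job.
estimator.fit(inputs={"train": "s3://amzn-s3-demo-bucket/train/"})
```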
Advanced capabilities:
- Iterative training - Chain multiple training methods (for example, SFT to RFT) with checkpoint reuse for targeted improvements
- Reasoning mode support - Train Nova 2 models with explicit reasoning steps for complex analytical tasks
Infrastructure options:
- SageMaker AI Training Jobs - Managed training with automatic resource provisioning for streamlined model customization workflows.
- SageMaker AI HyperPod - Resilient, large-scale training clusters for enterprise workloads requiring maximum control and scale.
Choosing the right customization approach
To decide on the best training approach for your use case, consider what each method is best suited for:
- Supervised Fine-Tuning (SFT) - Best for teaching specific response styles and domain knowledge. For standard SFT capabilities, see Amazon Nova customization on SageMaker training jobs. With Nova Forge, you can access advanced data mixing capabilities to combine your custom datasets with Amazon's proprietary training data.
- Reinforcement Fine-Tuning (RFT) - Best for aligning model behavior with complex preferences using measurable feedback. With Nova Forge, you can access multi-turn RFT with bring-your-own-orchestration (BYOO) capabilities.
- Continued Pre-Training (CPT) - Best for teaching domain knowledge at scale. For standard CPT capabilities, see Continued pre-training for Amazon Nova. With Nova Forge, you can access intermediate checkpoints and data mixing for domain-specific pre-training.