Conclusion and resources
Securing agentic AI systems requires applying established security practices with AI-specific adaptations rather than entirely new approaches. The autonomous nature of these systems demands particular attention to input validation, access controls, and system recovery capabilities. Ongoing threat-modeling activities support the safe expansion of system features, the adoption of new models, growth in the user base, and version upgrades. Continuous monitoring remains important as threat landscapes evolve. Organizations that establish these security foundations and embed them into ongoing operational schedules are better positioned to implement agentic AI systems safely and effectively.
Resources
The following frameworks and publications were used as references in developing this guide. They are also relevant to developing and operating agentic AI systems safely and securely on AWS.
AWS resources
- Agentic AI on AWS Prescriptive Guidance – This documentation series can help you implement agentic AI systems on AWS, including information about how to plan, design, and build these systems.
- AWS Security Reference Architecture – This library provides technical guidance, implementation code, and a validation tool that can help you build a multi-account security architecture on AWS.
- AWS Well-Architected Framework – This framework provides architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems in the AWS Cloud.
- AWS Well-Architected Tool – This AWS service can help you implement AWS best practices.
- Amazon Bedrock AgentCore – This agentic platform can help you build, deploy, and operate AI agents securely at scale by using any framework and foundation model.
- Security in Amazon Bedrock – This section of the Amazon Bedrock documentation can help you meet security and compliance objectives when building with Amazon Bedrock.
NIST resources
- NIST AI risk management framework (RMF) playbook – This playbook provides practical guidance and resources for implementing AI-specific risk-management practices and helps you align with the NIST AI RMF.
- Secure software development practices for generative AI and dual-use foundation models – This publication provides secure software development practices specifically for generative AI and dual-use foundation models as a community profile of the Secure Software Development Framework (SSDF).
- Security and privacy controls for information systems and organizations – This publication provides a catalog of security and privacy controls for federal information systems and organizations to help protect against cybersecurity threats and privacy risks.
OWASP resources
- Agentic AI threats and mitigations – This resource documents key security threats and mitigation strategies specifically for agentic AI systems and focuses on vulnerabilities and risks.
- OWASP top 10 for LLM applications 2025 – This list identifies critical security vulnerabilities and risks that are specific to LLM applications and provides essential guidance for securing AI systems against emerging threats.