

# Larger failure modes
<a name="larger-failure-modes"></a>

To design HA architectures that mitigate larger failure modes like rack, data center, Availability Zone (AZ), or Region failures, you should deploy multiple Outposts with sufficient infrastructure capacity in separate data centers with independent power and WAN connectivity. You anchor the Outposts to different AZs within an AWS Region or across multiple Regions. You should also provision resilient and sufficient site-to-site connectivity between the locations to support synchronous or asynchronous data replication and workload traffic redirection. Depending on your application architecture, you can use globally available [Amazon Route 53](https://aws.amazon.com/route53/) DNS and [Amazon Route 53 on Outposts](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/outpost-resolver.html) to direct traffic to the desired location, and automate traffic redirection to surviving locations in the event of large-scale failures.
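As a sketch of the traffic-redirection piece, the following builds a Route 53 failover record pair: a PRIMARY record with a health check pointing at one Outposts location, and a SECONDARY record pointing at the surviving location. All names, IDs, and IP addresses are hypothetical placeholders; the actual `change_resource_record_sets` call is shown in a comment because it requires AWS credentials and a real hosted zone.

```python
# Sketch: a Route 53 failover record pair that redirects traffic to a
# surviving Outposts location when the primary's health check fails.
# Names, IDs, and IP addresses are hypothetical placeholders.

def failover_record(name, ip, role, set_id, health_check_id=None):
    """Build one UPSERT change for a PRIMARY or SECONDARY failover A record."""
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        # Route 53 fails over when this health check reports unhealthy.
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

changes = [
    failover_record("app.example.internal.", "10.0.1.10", "PRIMARY",
                    "outpost-az1", health_check_id="hc-primary"),
    failover_record("app.example.internal.", "10.1.1.10", "SECONDARY",
                    "outpost-az2"),
]
# With credentials configured, you would apply this with:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z...", ChangeBatch={"Changes": changes})
```

The same pattern works with latency-based or weighted routing policies if you want active/active rather than active/standby behavior.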

# Outposts Rack Intra-VPC routing
<a name="intra-vpc-routing"></a>

AWS Outposts rack supports [intra-VPC communication across multiple Outposts](https://aws.amazon.com/blogs/compute/introducing-intra-vpc-communication-across-multiple-outposts-with-direct-vpc-routing/). Resources on two separate logical Outposts can communicate with each other by routing traffic between subnets of the same VPC spanning them, using the Outpost local gateways (LGWs). With intra-VPC communication across multiple Outposts, you can override the local route in the route table associated with your Outposts subnet by adding a more specific route to the other Outposts subnet with the local LGW as the next hop. This benefits application architectures that require spanning a VPC between two logical Outposts, such as [Amazon ECS across two Outposts racks](https://community.aws/content/2k5wK9P1oSC9I4ZzuSLWynsiJaa/extend-amazon-ecs-across-two-outposts-racks) or an [Amazon EKS cluster across AWS Outposts](https://aws.amazon.com/blogs/containers/deploy-an-amazon-eks-cluster-across-aws-outposts-with-intra-vpc-communication/).
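The route override described above can be sketched as the parameters for an EC2 `CreateRoute` call, which accepts a local gateway as the next hop. All IDs and CIDRs are hypothetical placeholders; the API call itself is shown in a comment.

```python
# Sketch: override the VPC "local" route on Outpost A's subnet route table
# with a more specific route to Outpost B's subnet via Outpost A's local
# gateway (LGW). All IDs and CIDRs are hypothetical placeholders.

def lgw_route_params(route_table_id, peer_subnet_cidr, local_gateway_id):
    """Parameters for EC2 CreateRoute: a destination more specific than the
    VPC's local route, next-hopped at the Outpost local gateway."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": peer_subnet_cidr,
        "LocalGatewayId": local_gateway_id,
    }

# On Outpost A's route table: reach Outpost B's subnet 10.0.2.0/24 via A's LGW.
params = lgw_route_params("rtb-0a1b2c", "10.0.2.0/24", "lgw-0d4e5f")
# With credentials configured, you would apply this with:
#   boto3.client("ec2").create_route(**params)
```

Because 10.0.2.0/24 is more specific than the VPC CIDR covered by the local route, traffic to Outpost B's subnet takes the LGW path across the customer network instead of staying on the local route.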

![\[Diagram showing network paths for single VPC with multiple logical Outposts\]](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/page-49-single-vpc-multiple-outposts.png)


Outposts-to-Outposts traffic routing through the Region is blocked, as this is an anti-pattern. Such traffic would incur egress charges in both directions and significantly higher latency than routing the traffic across the customer WAN.

# Outposts Rack Inter-VPC routing
<a name="inter-vpc-routing"></a>

Resources on two separate Outposts deployed in different VPCs can communicate with each other across the customer network. Deploying this architecture enables you to route Outposts-to-Outposts traffic through your local on-premises and WAN networks by adding routes toward the counterpart Outpost's VPC subnets.

![\[Diagram showing network paths for multiple VPC with multiple logical Outposts\]](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/page-50-multiple-vpc-multiple-outposts-networking-path.png)


Recommended practices for protecting against larger failure modes:
+ Deploy multiple Outposts anchored to multiple AZs and Regions.
+ Use separate VPCs for each Outpost in a multi-Outpost deployment.

# Route 53 Local Resolver on Outposts
<a name="route53-local-resolver"></a>

When the AWS Outposts service link is impacted by a temporary disconnect, local DNS resolution fails, making it difficult for applications and services to discover other services, even those running on the same Outposts rack. With Route 53 Resolver on AWS Outposts, applications and services continue to benefit from local DNS resolution to discover other services, even when connectivity to the parent AWS Region is lost. For resolution of on-premises host names, Route 53 Resolver on Outposts also helps to reduce latency, because query results are cached and served locally, while remaining fully integrated with Route 53 Resolver endpoints.

Route 53 Resolver inbound endpoints forward DNS queries they receive from outside the VPC to the Resolver running on Outposts. In contrast, Route 53 Resolver outbound endpoints enable the Resolver to forward DNS queries to DNS resolvers that you manage on your on-premises network, as illustrated in the following diagram.
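As a sketch, enabling the local resolver on an Outpost maps to the Route 53 Resolver `CreateOutpostResolver` API (the `route53resolver` client in boto3). The name, ARN, and instance type below are hypothetical placeholders, and the API call is shown in a comment because it requires credentials and a real Outpost.

```python
# Sketch: parameters for enabling Route 53 Resolver on an Outpost via the
# Route 53 Resolver CreateOutpostResolver API. The name and Outpost ARN
# are hypothetical placeholders.
import uuid

def outpost_resolver_params(name, outpost_arn, instance_type="m5.large"):
    return {
        "CreatorRequestId": str(uuid.uuid4()),  # idempotency token
        "Name": name,
        "PreferredInstanceType": instance_type,  # drawn from Outpost capacity
        "OutpostArn": outpost_arn,
    }

params = outpost_resolver_params(
    "op-local-resolver",
    "arn:aws:outposts:us-east-1:111122223333:outpost/op-0123456789abcdef0")
# With credentials configured, you would apply this with:
#   boto3.client("route53resolver").create_outpost_resolver(**params)
```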

![\[Diagram showing Route 53 resolver on Outposts\]](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/page-51-route53-resolver-outposts.png)


## Route 53 Resolver on Outposts considerations
<a name="route53-considerations"></a>

Consider the following:
+ You must enable Route 53 Resolver on Outposts, and it applies to the whole Outposts deployment, even if that involves multiple compute racks under a single Outposts ID.
+ To enable this feature, your Outposts must have enough compute capacity to deploy the local resolver as at least four EC2 instances of type c5.xlarge, m5.large, or m5.xlarge.
+ If you are using private DNS, you must share the private hosted zone with the required Outposts VPCs in order to cache the records locally in the Route 53 Resolver on Outposts.
+ To enable integration with on-premises DNS through inbound and outbound endpoints, your Outposts must have enough compute capacity to deploy two EC2 instances per Route 53 endpoint.
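The capacity requirements in the considerations above can be summarized in a small helper, a sketch whose instance counts come directly from this section (at least four instances for the local resolver, plus two per endpoint):

```python
# Sketch: minimum EC2 instance slots consumed on an Outpost by Route 53
# Resolver features, per the considerations above: at least 4 instances
# for the local resolver, plus 2 instances per inbound/outbound endpoint.

RESOLVER_INSTANCES = 4
INSTANCES_PER_ENDPOINT = 2

def resolver_capacity_needed(inbound_endpoints=0, outbound_endpoints=0):
    endpoints = inbound_endpoints + outbound_endpoints
    return RESOLVER_INSTANCES + INSTANCES_PER_ENDPOINT * endpoints

# Local resolver plus one inbound and one outbound endpoint:
print(resolver_capacity_needed(1, 1))  # → 8 instances
```

Factor this into your Outposts capacity planning alongside your workload's own instance requirements.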

# EKS Local Cluster on Outposts
<a name="eks-local-cluster"></a>

When the Outposts service link is disconnected from the parent Region, there might be challenges with services such as an EKS extended cluster, where the control plane lives in the Region. Among the challenges is the loss of communication between the EKS control plane and the worker nodes and pods. Although both worker nodes and pods can continue to operate and serve the applications that reside locally on Outposts, the Kubernetes control plane may consider them unhealthy and schedule their replacement when the connection to the control plane recovers. This may lead to application downtime when connectivity is restored.

To address this, there is an option to host your entire EKS cluster on Outposts. In this configuration, both the Kubernetes control plane and your worker nodes run locally on premises on your Outposts compute capacity. That way, your cluster continues to operate during a temporary drop in your service link connection and after it is restored.
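As a sketch, a local cluster is created through the EKS `CreateCluster` API by supplying an `outpostConfig`, which pins the control plane to the Outpost. The names, ARNs, and subnet IDs below are hypothetical placeholders; the API call is shown in a comment because it requires credentials and real resources.

```python
# Sketch: parameters for creating an EKS local cluster whose control plane
# runs on the Outpost, via the EKS CreateCluster API's outpostConfig.
# Names, ARNs, and subnet IDs are hypothetical placeholders.

def local_cluster_params(name, outpost_arn, subnet_ids, role_arn,
                         control_plane_type="m5.large"):
    return {
        "name": name,
        "roleArn": role_arn,
        "resourcesVpcConfig": {"subnetIds": subnet_ids},
        # outpostConfig places the Kubernetes control plane on the Outpost,
        # so the cluster keeps operating during service link disconnects.
        "outpostConfig": {
            "outpostArns": [outpost_arn],
            "controlPlaneInstanceType": control_plane_type,
        },
    }

params = local_cluster_params(
    "local-cluster",
    "arn:aws:outposts:us-east-1:111122223333:outpost/op-0123456789abcdef0",
    ["subnet-0a1b2c"],
    "arn:aws:iam::111122223333:role/eks-local-cluster-role")
# With credentials configured, you would apply this with:
#   boto3.client("eks").create_cluster(**params)
```

The same configuration can be expressed declaratively with eksctl's `outpost` section in a cluster config file.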

![\[Amazon EKS local cluster on Outposts\]](http://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/images/page-52-eks-local-cluster-outposts.png)


## Amazon EKS Local Cluster on Outposts considerations
<a name="eks-local-cluster-considerations"></a>

Consider the following when you deploy an Amazon EKS local cluster in Outposts:
+ During a disconnection, you cannot execute any change in the cluster that requires adding new worker nodes or auto-scaling a node group, because these operations depend on EC2 and Auto Scaling API calls toward the parent AWS Region.
+ There is a set of unsupported features on local clusters, listed in [eksctl AWS Outposts support](https://eksctl.io/usage/outposts/).