


# Limit Pod traffic with Kubernetes network policies
<a name="cni-network-policy"></a>

## Overview
<a name="_overview"></a>

By default, there are no restrictions in Kubernetes for IP addresses, ports, or connections between any Pods in your cluster or between your Pods and resources in any other network. You can use Kubernetes *network policy* to restrict network traffic to and from your Pods. For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) in the Kubernetes documentation.

## Standard network policy
<a name="_standard_network_policy"></a>

You can use the standard `NetworkPolicy` to segment pod-to-pod traffic in the cluster. These network policies operate at layers 3 and 4 of the OSI network model, allowing you to control traffic flow at the IP address or port level within your Amazon EKS cluster. Standard network policies are scoped to the namespace level.

### Use cases
<a name="_use_cases"></a>
+ Segment network traffic between workloads to ensure that only related applications can talk to each other.
+ Isolate tenants at the namespace level using policies to enforce network separation.

### Example
<a name="_example"></a>

In the policy below, egress traffic from the *webapp* pods in the *sun* namespace is restricted.

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-egress-policy
  namespace: sun
spec:
  podSelector:
    matchLabels:
      role: webapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: moon
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - namespaceSelector:
        matchLabels:
          name: stars
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
```

The policy applies to pods with the label `role: webapp` in the `sun` namespace.
+ Allowed traffic: Pods with the label `role: frontend` in the `moon` namespace on TCP port `8080`
+ Allowed traffic: Pods with the label `role: frontend` in the `stars` namespace on TCP port `8080`
+ Blocked traffic: All other outbound traffic from the `webapp` pods is implicitly denied
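The implicit deny above applies only to the pods selected by the policy. To set a namespace-wide default instead, the Kubernetes documentation describes a deny-all pattern that uses an empty `podSelector`. A minimal sketch for the example namespace (the policy name is hypothetical):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress   # hypothetical name
  namespace: sun
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Egress          # no egress rules listed, so all egress is denied
```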

## Admin (or cluster) network policy
<a name="_admin_or_cluster_network_policy"></a>

![\[Illustration of the evaluation order for network policies in EKS\]](http://docs.aws.amazon.com/eks/latest/userguide/images/evaluation-order.png)


You can use the `ClusterNetworkPolicy` to enforce a network security standard that applies to the whole cluster. Instead of repetitively defining and maintaining a distinct policy for each namespace, you can use a single policy to centrally manage network access controls for different workloads in the cluster, irrespective of their namespace.

### Use cases
<a name="_use_cases_2"></a>
+ Centrally manage network access controls for all (or a subset of) workloads in your EKS cluster.
+ Define a default network security posture across the cluster.
+ Extend organizational security standards to the scope of the cluster in a more operationally efficient way.

### Example
<a name="_example_2"></a>

The policy below explicitly blocks cluster traffic from all other namespaces to prevent network access to a sensitive workload namespace.

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: protect-sensitive-workload
spec:
  tier: Admin
  priority: 10
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: earth
  ingress:
    - action: Deny
      from:
      - namespaces:
          matchLabels: {} # Match all namespaces.
      name: select-all-deny-all
```

## Important notes
<a name="_important_notes"></a>

Network policies in the Amazon VPC CNI plugin for Kubernetes are supported in the configurations listed below.
+ Version `1.21.0` or later of the Amazon VPC CNI plugin for Kubernetes, for both standard and admin network policies.
+ Cluster configured for `IPv4` or `IPv6` addresses.
+ You can use network policies with [security groups for Pods](security-groups-for-pods.md). With network policies, you can control all in-cluster communication. With security groups for Pods, you can control access to AWS services from applications within a Pod.
+ You can use network policies with *custom networking* and *prefix delegation*.

## Considerations
<a name="cni-network-policy-considerations"></a>

 **Architecture** 
+ You can apply Amazon VPC CNI plugin for Kubernetes network policies to Amazon EC2 Linux nodes only. You can’t apply the policies to Fargate or Windows nodes.
+ Network policies apply to either `IPv4` or `IPv6` addresses, but not both. In an `IPv4` cluster, the VPC CNI assigns `IPv4` addresses to pods and applies `IPv4` policies. In an `IPv6` cluster, the VPC CNI assigns `IPv6` addresses to pods and applies `IPv6` policies. Any `IPv4` network policy rules applied to an `IPv6` cluster are ignored, and any `IPv6` network policy rules applied to an `IPv4` cluster are ignored.

 **Network Policies** 
+ Network Policies are only applied to Pods that are part of a Deployment. Standalone Pods that don’t have a `metadata.ownerReferences` set can’t have network policies applied to them.
+ You can apply multiple network policies to the same Pod. When two or more policies that select the same Pod are configured, all policies are applied to the Pod.
+ The maximum number of combinations of ports and protocols for a single IP address range (CIDR) is 24 across all of your network policies. Selectors such as `namespaceSelector` resolve to one or more CIDRs. If multiple selectors resolve to a single CIDR or you specify the same direct CIDR multiple times in the same or different network policies, these all count toward this limit.
+ For any of your Kubernetes services, the service port must be the same as the container port. If you’re using named ports, use the same name in the service spec too.
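The named-port requirement in the last bullet above can be sketched as follows (the service name, labels, port name, and image are hypothetical; note that the service `port` equals the `containerPort` and the port names match):

```
apiVersion: v1
kind: Service
metadata:
  name: webapp              # hypothetical service
spec:
  selector:
    role: webapp
  ports:
  - name: http-web          # must match the container port name below
    port: 8080              # service port must equal the container port
    targetPort: http-web
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    role: webapp
spec:
  containers:
  - name: webapp
    image: public.ecr.aws/nginx/nginx:stable   # hypothetical image
    ports:
    - name: http-web        # same name referenced by the Service
      containerPort: 8080
```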

 **Admin Network Policies** 

1.  **Admin tier policies (evaluated first)**: All Admin tier ClusterNetworkPolicies are evaluated before any other policies. Within the Admin tier, policies are processed in priority order (lowest priority number first). The action type determines what happens next.
   +  **Deny action (highest precedence)**: When an Admin policy with a Deny action matches traffic, that traffic is immediately blocked regardless of any other policies. No further ClusterNetworkPolicy or NetworkPolicy rules are processed. This ensures that organization-wide security controls cannot be overridden by namespace-level policies.
   +  **Allow action**: After Deny rules are evaluated, Admin policies with Allow actions are processed in priority order (lowest priority number first). When an Allow action matches, the traffic is accepted and no further policy evaluation occurs. These policies can grant access across multiple namespaces based on label selectors, providing centralized control over which workloads can access specific resources.
   +  **Pass action**: Pass actions in Admin tier policies delegate decision-making to lower tiers. When traffic matches a Pass rule, evaluation skips all remaining Admin tier rules for that traffic and proceeds directly to the NetworkPolicy tier. This allows administrators to explicitly delegate control for certain traffic patterns to application teams. For example, you might use Pass rules to delegate intra-namespace traffic management to namespace administrators while maintaining strict controls over external access.

1.  **Network policy tier**: If no Admin tier policy matches with Deny or Allow, or if a Pass action was matched, namespace-scoped NetworkPolicy resources are evaluated next. These policies provide fine-grained control within individual namespaces and are managed by application teams. Namespace-scoped policies can only be more restrictive than Admin policies. They cannot override an Admin policy’s Deny decision, but they can further restrict traffic that was allowed or passed by Admin policies.

1.  **Baseline tier Admin policies**: If no Admin or namespace-scoped policies match the traffic, Baseline tier ClusterNetworkPolicies are evaluated. These provide default security postures that can be overridden by namespace-scoped policies, allowing administrators to set organization-wide defaults while giving teams flexibility to customize as needed. Baseline policies are evaluated in priority order (lowest priority number first).

1.  **Default deny (if no policies match)**: This deny-by-default behavior ensures that only explicitly permitted connections are allowed, maintaining a strong security posture.
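As an illustration of the Pass delegation described in the Admin tier step above, a sketch using the same `networking.k8s.aws/v1alpha1` API as the earlier example (the policy name, namespace label, and priority value are hypothetical):

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: delegate-intra-namespace   # hypothetical name
spec:
  tier: Admin
  priority: 20
  subject:
    namespaces:
      matchLabels:
        team-managed: "true"       # hypothetical label on team namespaces
  ingress:
    - action: Pass                 # skip remaining Admin rules; evaluate the NetworkPolicy tier
      from:
      - namespaces:
          matchLabels:
            team-managed: "true"
      name: delegate-to-namespace-policies
```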

 **Migration** 
+ If your cluster is currently using a third party solution to manage Kubernetes network policies, you can use those same policies with the Amazon VPC CNI plugin for Kubernetes. However, you must remove your existing solution so that it isn’t managing the same policies.

**Warning**  
We recommend that after you remove a network policy solution, you replace all of the nodes that the solution was applied to. If a pod belonging to the solution exits unexpectedly, it might leave its traffic rules behind on the node.

 **Installation** 
+ The network policy feature creates and requires a Custom Resource Definition (CRD) called `policyendpoints.networking.k8s.aws`. `PolicyEndpoint` custom resources are managed by Amazon EKS. You shouldn’t modify or delete these resources.
+ If you run pods that use the instance role IAM credentials or connect to the EC2 IMDS, check for network policies that would block access to the EC2 IMDS. You might need to add a network policy to allow access to the EC2 IMDS. For more information, see [Instance metadata and user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) in the Amazon EC2 User Guide.

  Pods that use *IAM roles for service accounts* or *EKS Pod Identity* don’t access EC2 IMDS.
+ The Amazon VPC CNI plugin for Kubernetes doesn’t apply network policies to additional network interfaces for each pod, only the primary interface for each pod (`eth0`). This affects the following architectures:
  + `IPv6` pods with the `ENABLE_V4_EGRESS` variable set to `true`. This variable enables the `IPv4` egress feature to connect `IPv6` pods to `IPv4` endpoints, such as those outside the cluster. The `IPv4` egress feature works by creating an additional network interface with a local loopback `IPv4` address.
  + When using chained network plugins such as Multus. Because these plugins add additional network interfaces to each pod, network policies aren’t applied to those additional interfaces.
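For the IMDS access mentioned in the installation notes above, a minimal egress allowance can be sketched as follows (the namespace, policy name, and pod label are hypothetical; `169.254.169.254` is the well-known link-local IMDS address):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-imds-egress     # hypothetical name
  namespace: default          # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      needs-imds: "true"      # hypothetical label for pods that use instance role credentials
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32   # link-local IMDS address
```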

# Restrict Pod network traffic with Kubernetes network policies
<a name="cni-network-policy-configure"></a>

You can use a Kubernetes network policy to restrict network traffic to and from your Pods. For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) in the Kubernetes documentation.

You must configure the following in order to use this feature:

1. Set up policy enforcement at Pod startup. You do this in the `aws-node` container of the VPC CNI `DaemonSet`.

1. Enable the network policy parameter for the add-on.

1. Configure your cluster to use Kubernetes network policies.

Before you begin, review the considerations. For more information, see [Considerations](cni-network-policy.md#cni-network-policy-considerations).

## Prerequisites
<a name="cni-network-policy-prereqs"></a>

The following are prerequisites for the feature:

### Minimum cluster version
<a name="cni-network-policy-minimum"></a>

An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md). The cluster must be running one of the Kubernetes versions and platform versions listed in the following table. Note that any Kubernetes and platform versions later than those listed are also supported. You can check your current Kubernetes version by replacing *my-cluster* in the following command with the name of your cluster and then running the modified command:

```
aws eks describe-cluster --name my-cluster --query cluster.version --output text
```


| Kubernetes version | Platform version | 
| --- | --- | 
|   `1.27.4`   |   `eks.5`   | 
|   `1.26.7`   |   `eks.6`   | 

### Minimum VPC CNI version
<a name="cni-network-policy-minimum-vpc"></a>

To create both standard Kubernetes network policies and admin network policies, you need version `1.21` or later of the VPC CNI plugin. You can see which version you currently have with the following command.

```
kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
```

If your version is earlier than `1.21`, see [Update the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-update.md) to upgrade to version `1.21` or later.

### Minimum Linux kernel version
<a name="cni-network-policy-minimum-linux"></a>

Your nodes must have Linux kernel version `5.10` or later. You can check your kernel version with `uname -r`. The latest versions of the Amazon EKS optimized Amazon Linux, Amazon EKS optimized accelerated Amazon Linux, and Bottlerocket AMIs already have the required kernel version.

The Amazon EKS optimized accelerated Amazon Linux AMI version `v20231116` or later has kernel version `5.10`.

## Step 1: Set up policy enforcement at Pod startup
<a name="cni-network-policy-configure-policy"></a>

The Amazon VPC CNI plugin for Kubernetes configures network policies for pods in parallel with pod provisioning. Until all of the policies are configured for a new pod, containers in the pod start with a *default allow policy*. This is called *standard mode*. A default allow policy means that all ingress and egress traffic is allowed to and from the new pod. In other words, the pod has no firewall rules enforced (all traffic is allowed) until it is updated with the active policies.

With the `NETWORK_POLICY_ENFORCING_MODE` variable set to `strict`, pods that use the VPC CNI start with a *default deny policy*, and then policies are configured. This is called *strict mode*. In strict mode, you must have a network policy for every endpoint that your pods need to access in your cluster. Note that this requirement applies to the CoreDNS pods. The default deny policy isn’t applied to pods that use host networking.

You can change the default network policy by setting the environment variable `NETWORK_POLICY_ENFORCING_MODE` to `strict` in the `aws-node` container of the VPC CNI `DaemonSet`.

```
env:
  - name: NETWORK_POLICY_ENFORCING_MODE
    value: "strict"
```
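Because strict mode starts pods with a default deny policy, most workloads need at least a DNS egress policy. A minimal sketch (the namespace and policy name are hypothetical; `k8s-app: kube-dns` is the standard label on CoreDNS pods, and `kubernetes.io/metadata.name` is the standard namespace label):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress        # hypothetical name
  namespace: default            # hypothetical namespace
spec:
  podSelector: {}               # all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns     # standard CoreDNS label
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```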

## Step 2: Enable the network policy parameter for the add-on
<a name="enable-network-policy-parameter"></a>

The network policy feature uses port `8162` on the node for metrics by default. Also, the feature uses port `8163` for health probes. If another application on the nodes or inside pods needs to use these ports, the application fails to run. In VPC CNI version `v1.14.1` or later, you can change these ports.

Use the following procedure to enable the network policy parameter for the add-on.

### AWS Management Console
<a name="cni-network-policy-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** list.

   1. Expand the **Optional configuration settings**.

   1. Enter the JSON key `"enableNetworkPolicy":` and value `"true"` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`.

      The following example has the network policy feature enabled, with metrics and health probes set to the default port numbers:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "healthProbeBindAddr": "8163",
              "metricsBindAddr": "8162"
          }
      }
      ```

### Helm
<a name="cni-network-helm"></a>

If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to change the ports.

1. Run the following command to change the ports. Set the port numbers in the values for the keys `nodeAgent.metricsBindAddr` and `nodeAgent.healthProbeBindAddr`.

   ```
   helm upgrade --set nodeAgent.metricsBindAddr=8162 --set nodeAgent.healthProbeBindAddr=8163 aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

### kubectl
<a name="cni-network-policy-kubectl"></a>

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace the port numbers in the following command arguments in the `args:` of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

   ```
       - args:
               - --metrics-bind-addr=:8162
               - --health-probe-bind-addr=:8163
   ```

## Step 3: Configure your cluster to use Kubernetes network policies
<a name="cni-network-policy-setup"></a>

You can set this for an Amazon EKS add-on or self-managed add-on.

### Amazon EKS add-on
<a name="cni-network-policy-setup-procedure-add-on"></a>

Using the AWS CLI, you can configure the cluster to use Kubernetes network policies by running the following command. Replace `my-cluster` with the name of your cluster and the IAM role ARN with the role that you are using.

```
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
    --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
    --resolve-conflicts PRESERVE --configuration-values '{"enableNetworkPolicy": "true"}'
```

To configure this using the AWS Management Console, follow these steps:

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** list.

   1. Expand the **Optional configuration settings**.

   1. Enter the JSON key `"enableNetworkPolicy":` and value `"true"` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`. The following example shows network policy is enabled:

      ```
      { "enableNetworkPolicy": "true" }
      ```

      The following screenshot shows an example of this scenario.  
![\[AWS Management Console showing the VPC CNI add-on with network policy in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy.png)

### Self-managed add-on
<a name="cni-network-policy-setup-procedure-self-managed-add-on"></a>

If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to enable network policy.

1. Run the following command to enable network policy.

   ```
   helm upgrade --set enableNetworkPolicy=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

1. Open the `amazon-vpc-cni` `ConfigMap` in your editor.

   ```
   kubectl edit configmap -n kube-system amazon-vpc-cni -o yaml
   ```

1. Add the following line to the `data` in the `ConfigMap`.

   ```
   enable-network-policy-controller: "true"
   ```

   Once you’ve added the line, your `ConfigMap` should look like the following example.

   ```
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: amazon-vpc-cni
     namespace: kube-system
   data:
     enable-network-policy-controller: "true"
   ```

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

   1. Replace the `false` with `true` in the command argument `--enable-network-policy=false` in the `args:` in the `aws-network-policy-agent` container in the VPC CNI `aws-node` daemonset manifest.

      ```
           - args:
              - --enable-network-policy=true
      ```

## Step 4: Next steps
<a name="cni-network-policy-setup-procedure-confirm"></a>

After you complete the configuration, confirm that the `aws-node` pods are running on your cluster.

```
kubectl get pods -n kube-system | grep 'aws-node\|amazon'
```

An example output is as follows.

```
aws-node-gmqp7                                          2/2     Running   1 (24h ago)   24h
aws-node-prnsh                                          2/2     Running   1 (24h ago)   24h
```

There are two containers in the `aws-node` pods in versions `1.14` and later. In previous versions, or if network policy is disabled, there is only a single container in the `aws-node` pods.

You can now deploy Kubernetes network policies to your cluster.

To implement Kubernetes network policies, you can create Kubernetes `NetworkPolicy` or `ClusterNetworkPolicy` objects and deploy them to your cluster. `NetworkPolicy` objects are scoped to a namespace, while `ClusterNetworkPolicy` objects can be scoped to the whole cluster or multiple namespaces. You implement policies to allow or deny traffic between Pods based on label selectors, namespaces, and IP address ranges. For more information about creating `NetworkPolicy` objects, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource) in the Kubernetes documentation.
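The IP-address-range dimension mentioned above uses the `ipBlock` field of the standard `NetworkPolicy` API. A minimal sketch (the policy name, namespace, labels, and CIDRs are hypothetical):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-vpc-ingress       # hypothetical name
  namespace: default            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      role: api                 # hypothetical label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16       # hypothetical VPC CIDR
        except:
        - 10.0.50.0/24          # hypothetical excluded subnet
```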

Enforcement of Kubernetes `NetworkPolicy` objects is implemented using the extended Berkeley Packet Filter (eBPF). Relative to `iptables`-based implementations, it offers lower latency and better performance characteristics, including reduced CPU utilization and the avoidance of sequential lookups. Additionally, eBPF probes provide access to context-rich data that helps debug complex kernel-level issues and improve observability. Amazon EKS supports an eBPF-based exporter that leverages the probes to log policy results on each node and export the data to external log collectors to aid in debugging. For more information, see the [eBPF documentation](https://ebpf.io/what-is-ebpf/#what-is-ebpf).

# Disable Kubernetes network policies for Amazon EKS Pod network traffic
<a name="network-policy-disable"></a>

Use the following steps to disable Kubernetes network policies and stop restricting Amazon EKS Pod network traffic.

1. List all Kubernetes network policies.

   ```
   kubectl get netpol -A
   ```

1. Delete each Kubernetes network policy. You must delete all network policies before disabling network policies.

   ```
   kubectl delete netpol <policy-name>
   ```

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace the `true` with `false` in the command argument `--enable-network-policy=true` in the `args:` in the `aws-network-policy-agent` container in the VPC CNI `aws-node` daemonset manifest.

   ```
        - args:
           - --enable-network-policy=false
   ```

# Troubleshooting Kubernetes network policies for Amazon EKS
<a name="network-policies-troubleshooting"></a>

This is the troubleshooting guide for the network policy feature of the Amazon VPC CNI.

This guide covers:
+ Install information, CRD, and RBAC permissions: [New `policyendpoints` CRD and permissions](#network-policies-troubleshooting-permissions)
+ Logs to examine when diagnosing network policy problems: [Network policy logs](#network-policies-troubleshooting-flowlogs)
+ Running tools from the [eBPF SDK](#network-policies-ebpf-sdk) to troubleshoot
+ [Known issues and solutions](#network-policies-troubleshooting-known-issues)

**Note**  
Note that network policies are only applied to pods that are created by Kubernetes *Deployments*. For more limitations of the network policies in the VPC CNI, see [Considerations](cni-network-policy.md#cni-network-policy-considerations).

You can troubleshoot and investigate network connections that use network policies by reading the [Network policy logs](#network-policies-troubleshooting-flowlogs) and by running tools from the [eBPF SDK](#network-policies-ebpf-sdk).
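Since policies take effect on pods created through workload controllers such as Deployments, a workload that matches the earlier `sun`/`role: webapp` example could be sketched as follows (the container image is hypothetical):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  namespace: sun
spec:
  replicas: 2
  selector:
    matchLabels:
      role: webapp
  template:
    metadata:
      labels:
        role: webapp        # label selected by the network policy
    spec:
      containers:
      - name: webapp
        image: public.ecr.aws/nginx/nginx:stable   # hypothetical image
```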

## New `policyendpoints` CRD and permissions
<a name="network-policies-troubleshooting-permissions"></a>
+ CRD: `policyendpoints.networking.k8s.aws` 
+ Kubernetes API: `apiservice` called `v1.networking.k8s.io` 
+ Kubernetes resource: `Kind: NetworkPolicy` 
+ RBAC: `ClusterRole` called `aws-node` (VPC CNI), `ClusterRole` called `eks:network-policy-controller` (network policy controller in EKS cluster control plane)

For network policy, the VPC CNI creates a new `CustomResourceDefinition` (CRD) called `policyendpoints.networking.k8s.aws`. The VPC CNI must have permissions to create the CRD and to create custom resources of this CRD and of the other CRD installed by the VPC CNI (`eniconfigs.crd.k8s.amazonaws.com`). Both of the CRDs are available in the [`crds.yaml` file](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/charts/aws-vpc-cni/crds/customresourcedefinition.yaml) on GitHub. Specifically, the VPC CNI must have `get`, `list`, and `watch` verb permissions for `policyendpoints`.

The Kubernetes *Network Policy* is part of the `apiservice` called `v1.networking.k8s.io`, and this is `apiversion: networking.k8s.io/v1` in your policy YAML files. The VPC CNI `DaemonSet` must have permissions to use this part of the Kubernetes API.

The VPC CNI permissions are in a `ClusterRole` called `aws-node`. Note that `ClusterRole` objects aren’t grouped in namespaces. The following shows the `aws-node` `ClusterRole` of a cluster:

```
kubectl get clusterrole aws-node -o yaml
```

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: aws-vpc-cni
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: aws-node
    app.kubernetes.io/version: v1.19.4
    helm.sh/chart: aws-vpc-cni-1.19.4
    k8s-app: aws-node
  name: aws-node
rules:
- apiGroups:
  - crd.k8s.amazonaws.com
  resources:
  - eniconfigs
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - list
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/status
  verbs:
  - get
- apiGroups:
  - vpcresources.k8s.aws
  resources:
  - cninodes
  verbs:
  - get
  - list
  - watch
  - patch
```

Also, a new controller runs in the control plane of each EKS cluster. The controller uses the permissions of the `ClusterRole` called `eks:network-policy-controller`. The following shows the `eks:network-policy-controller` `ClusterRole` of a cluster:

```
kubectl get clusterrole eks:network-policy-controller -o yaml
```

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: amazon-network-policy-controller-k8s
  name: eks:network-policy-controller
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/finalizers
  verbs:
  - update
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - patch
  - update
  - watch
```

## Network policy logs
<a name="network-policies-troubleshooting-flowlogs"></a>

Each decision by the VPC CNI about whether a connection is allowed or denied by a network policy is logged in *flow logs*. The network policy logs on each node include the flow logs for every pod that has a network policy. Network policy logs are stored at `/var/log/aws-routed-eni/network-policy-agent.log`. The following example is from a `network-policy-agent.log` file:

```
{"level":"info","timestamp":"2023-05-30T16:05:32.573Z","logger":"ebpf-client","msg":"Flow Info: ","Src IP":"192.168.87.155","Src Port":38971,"Dest IP":"64.6.160","Dest Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
```

Network policy logs are disabled by default. To enable the network policy logs, follow these steps:

**Note**  
Network policy logs require an additional 1 vCPU for the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

### Amazon EKS add-on
<a name="cni-network-policy-flowlogs-addon"></a>

**AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** dropdown list.

   1. Expand the **Optional configuration settings**.

   1. In **Configuration values**, enter the top-level JSON key `"nodeAgent":` with a value that is an object containing the key `"enablePolicyEventLogs":` with the value `"true"`. The resulting text must be a valid JSON object. The following example shows that network policy and the network policy logs are enabled, and that the network policy logs are sent to CloudWatch Logs:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "enablePolicyEventLogs": "true"
          }
      }
      ```

The following screenshot shows an example of this scenario.

![\[AWS Management Console showing the VPC CNI add-on with network policy and CloudWatch Logs in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy-logs.png)


**AWS CLI**

1. Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster and replace the IAM role ARN with the role that you are using.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
       --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
       --resolve-conflicts PRESERVE --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true"}}'
   ```

### Self-managed add-on
<a name="cni-network-policy-flowlogs-selfmanaged"></a>

**Helm**  
If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to write the network policy logs.  

1. Run the following command to enable the network policy logs.

   ```
   helm upgrade --set nodeAgent.enablePolicyEventLogs=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

**kubectl**
If you have installed the Amazon VPC CNI plugin for Kubernetes through `kubectl`, you can update the configuration to write the network policy logs.  

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. In the `args:` of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest, change the command argument `--enable-policy-event-logs=false` to `--enable-policy-event-logs=true`.

   ```
        - args:
           - --enable-policy-event-logs=true
   ```

### Send network policy logs to Amazon CloudWatch Logs
<a name="network-policies-cloudwatchlogs"></a>

You can monitor the network policy logs using services such as Amazon CloudWatch Logs. You can use the following methods to send the network policy logs to CloudWatch Logs.

For EKS clusters, the policy logs are located under `/aws/eks/cluster-name/cluster/`. For self-managed Kubernetes clusters, the logs are placed under `/aws/k8s-cluster/cluster/`.
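For example, for an EKS cluster named `my-cluster`, the log group name can be assembled and followed with the AWS CLI. `my-cluster` is a placeholder, and the `aws logs tail` command is printed here rather than executed:

```
# Placeholder cluster name; substitute your own.
CLUSTER_NAME="my-cluster"
LOG_GROUP="/aws/eks/${CLUSTER_NAME}/cluster"

# Follow new entries, keeping only DENY verdicts:
echo "aws logs tail ${LOG_GROUP} --follow --filter-pattern DENY"
```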

#### Send network policy logs with Amazon VPC CNI plugin for Kubernetes
<a name="network-policies-cwl-agent"></a>

If you enable network policy, a second container for a *node agent* is added to the `aws-node` pods. This node agent can send the network policy logs to CloudWatch Logs.

**Note**  
Only the network policy logs are sent by the node agent. Other logs made by the VPC CNI aren’t included.

##### Prerequisites
<a name="cni-network-policy-cwl-agent-prereqs"></a>
+ Add the following permissions as a stanza or separate policy to the IAM role that you are using for the VPC CNI.

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "logs:DescribeLogGroups",
                  "logs:CreateLogGroup",
                  "logs:CreateLogStream",
                  "logs:PutLogEvents"
              ],
              "Resource": "*"
          }
      ]
  }
  ```
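As a sketch, the statement above can be saved to a file, validated, and attached as an inline policy with `put-role-policy`. The role name and policy name below are placeholders, and the `aws iam` call is shown as a comment:

```
# Save the policy document shown above.
cat > cni-cwl-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
EOF

# Confirm the document parses before attaching it.
python3 -m json.tool cni-cwl-policy.json > /dev/null && echo "policy JSON is valid"

# Attach as an inline policy to the VPC CNI role (placeholder names):
#   aws iam put-role-policy --role-name AmazonEKSVPCCNIRole \
#       --policy-name cni-network-policy-logs \
#       --policy-document file://cni-cwl-policy.json
```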

##### Amazon EKS add-on
<a name="cni-network-policy-cwl-agent-addon"></a>

**AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** dropdown list.

   1. Expand the **Optional configuration settings**.

   1. In **Configuration values**, enter a JSON object with the top-level key `"nodeAgent"` whose value is an object containing the keys `"enablePolicyEventLogs"` and `"enableCloudWatchLogs"`, each with the value `"true"`. The resulting text must be a valid JSON object. The following example enables network policy and the network policy logs, and sends the logs to CloudWatch Logs:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "enablePolicyEventLogs": "true",
              "enableCloudWatchLogs": "true"
          }
      }
      ```

The following screenshot shows an example of this scenario.

![\[AWS Management Console showing the VPC CNI add-on with network policy and CloudWatch Logs in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy-logs-cwl.png)


**AWS CLI**

1. Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster and replace the IAM role ARN with the role that you are using.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
       --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
       --resolve-conflicts PRESERVE --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true", "enableCloudWatchLogs": "true"}}'
   ```

##### Self-managed add-on
<a name="cni-network-policy-cwl-agent-selfmanaged"></a>

**Helm**
If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to send network policy logs to CloudWatch Logs.  

1. Run the following command to enable network policy logs and send them to CloudWatch Logs.

   ```
   helm upgrade --set nodeAgent.enablePolicyEventLogs=true --set nodeAgent.enableCloudWatchLogs=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

**kubectl**

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. In the `args:` of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest, replace `false` with `true` in both command arguments `--enable-policy-event-logs=false` and `--enable-cloudwatch-logs=false`.

   ```
        - args:
           - --enable-policy-event-logs=true
           - --enable-cloudwatch-logs=true
   ```

#### Send network policy logs with a Fluent Bit `DaemonSet`
<a name="network-policies-cwl-fluentbit"></a>

If you are using Fluent Bit in a `DaemonSet` to send logs from your nodes, you can add configuration to include the network policy logs from network policies. You can use the following example configuration:

```
    [INPUT]
        Name              tail
        Tag               eksnp.*
        Path              /var/log/aws-routed-eni/network-policy-agent*.log
        Parser            json
        DB                /var/log/aws-routed-eni/flb_npagent.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
```
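To forward these records on to CloudWatch Logs, an `[OUTPUT]` stanza along the following lines could accompany the input. This is a sketch that assumes the Fluent Bit `cloudwatch_logs` output plugin; the Region and log group name are example values:

```
    [OUTPUT]
        Name                cloudwatch_logs
        Match               eksnp.*
        region              us-west-2
        log_group_name      /aws/eks/my-cluster/network-policy-agent
        log_stream_prefix   npagent-
        auto_create_group   true
```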

## Included eBPF SDK
<a name="network-policies-ebpf-sdk"></a>

The Amazon VPC CNI plugin for Kubernetes installs an eBPF SDK, a collection of tools, on the nodes. You can use the eBPF SDK tools to identify issues with network policies. For example, the following command lists the eBPF programs that are running on the node.

```
sudo /opt/cni/bin/aws-eks-na-cli ebpf progs
```

To run this command, you can use any method to connect to the node.

## Known issues and solutions
<a name="network-policies-troubleshooting-known-issues"></a>

The following sections describe known issues with the Amazon VPC CNI network policy feature and their solutions.

### Network policy logs generated despite enable-policy-event-logs set to false
<a name="network-policies-troubleshooting-policy-event-logs"></a>

 **Issue**: EKS VPC CNI is generating network policy logs even when the `enable-policy-event-logs` setting is set to `false`.

 **Solution**: The `enable-policy-event-logs` setting only disables the policy "decision" logs, but it won’t disable all Network Policy agent logging. This behavior is documented in the [aws-network-policy-agent README](https://github.com/aws/aws-network-policy-agent/) on GitHub. To completely disable logging, you might need to adjust other logging configurations.

### Network policy map cleanup issues
<a name="network-policies-troubleshooting-map-cleanup"></a>

 **Issue**: `policyendpoint` objects still exist and aren’t cleaned up after pods are deleted.

 **Solution**: This issue was caused by a problem with the VPC CNI add-on version 1.19.3-eksbuild.1. Update to a newer version of the VPC CNI add-on to resolve this issue.

### Network policies aren’t applied
<a name="network-policies-troubleshooting-policyendpoint"></a>

 **Issue**: Network policy feature is enabled in the Amazon VPC CNI plugin, but network policies are not being applied correctly.

If you create a network policy (`kind: NetworkPolicy`) and it doesn’t affect the pod, check that a `policyendpoint` object was created in the same namespace as the pod. If there are no `policyendpoint` objects in the namespace, the network policy controller (part of the EKS cluster) was unable to create the network policy rules for the network policy agent (part of the VPC CNI) to apply.

 **Solution**: Fix the permissions of the VPC CNI (`ClusterRole`: `aws-node`) and the network policy controller (`ClusterRole`: `eks:network-policy-controller`), and allow these actions in any policy enforcement tool such as Kyverno. Ensure that Kyverno policies are not blocking the creation of `policyendpoint` objects. For the necessary permissions, see [New `policyendpoints` CRD and permissions](#network-policies-troubleshooting-permissions).

### Pods don’t return to default deny state after policy deletion in strict mode
<a name="network-policies-troubleshooting-strict-mode-fallback"></a>

 **Issue**: When network policies are enabled in strict mode, pods start with a default deny policy. After policies are applied, traffic is allowed to the specified endpoints. However, when policies are deleted, the pod doesn’t return to the default deny state and instead goes to a default allow state.

 **Solution**: This issue was fixed in the VPC CNI release 1.19.3, which included the network policy agent 1.2.0 release. After the fix, with strict mode enabled, once policies are removed, the pod will fall back to the default deny state as expected.

### Security Groups for Pods startup latency
<a name="network-policies-troubleshooting-sgfp-latency"></a>

 **Issue**: When using the Security Groups for Pods feature in EKS, there is increased pod startup latency.

 **Solution**: The latency is caused by throttling of the `CreateNetworkInterface` API, which the VPC resource controller calls to create branch ENIs for the pods. Check your account’s API limits for this operation and consider requesting a limit increase if needed.

### FailedScheduling due to insufficient vpc.amazonaws.com/pod-eni
<a name="network-policies-troubleshooting-insufficient-pod-eni"></a>

 **Issue**: Pods fail to schedule with the error: `FailedScheduling 2m53s (x28 over 137m) default-scheduler 0/5 nodes are available: 5 Insufficient vpc.amazonaws.com/pod-eni. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.` 

 **Solution**: As with the previous issue, assigning security groups to pods increases pod scheduling latency; the time to attach each branch ENI can exceed the CNI’s threshold, causing pods to fail to start. This is expected behavior when using Security Groups for Pods. Consider the scheduling implications when designing your workload architecture.

### IPAM connectivity issues and segmentation faults
<a name="network-policies-troubleshooting-systemd-udev"></a>

 **Issue**: Multiple errors occur including IPAM connectivity issues, throttling requests, and segmentation faults:
+  `Checking for IPAM connectivity ...`
+  `Throttling request took 1.047064274s` 
+  `Retrying waiting for IPAM-D` 
+  `panic: runtime error: invalid memory address or nil pointer dereference` 

 **Solution**: This issue occurs if you install `systemd-udev` on AL2023, as the file is re-written with a breaking policy. This can happen when updating to a different `releasever` that has an updated package or manually updating the package itself. Avoid installing or updating `systemd-udev` on AL2023 nodes.

### Failed to find device by name error
<a name="network-policies-troubleshooting-device-not-found"></a>

 **Issue**: Error message: `{"level":"error","ts":"2025-02-05T20:27:18.669Z","caller":"ebpf/bpf_client.go:578","msg":"failed to find device by name eni9ea69618bf0: %!w(netlink.LinkNotFoundError={0xc000115310})"}` 

 **Solution**: This issue has been identified and fixed in the latest versions of the Amazon VPC CNI network policy agent (v1.2.0). Update to the latest version of the VPC CNI to resolve this issue.

### CVE vulnerabilities in Multus CNI image
<a name="network-policies-troubleshooting-cve-multus"></a>

 **Issue**: Enhanced EKS ImageScan CVE Report identifies vulnerabilities in the Multus CNI image version v4.1.4-eksbuild.2_thick.

 **Solution**: Update to the new version of the Multus CNI image and the new Network Policy Controller image, which address the vulnerabilities found in the previous version.

### Flow Info DENY verdicts in logs
<a name="network-policies-troubleshooting-flow-info-deny"></a>

 **Issue**: Network policy logs show DENY verdicts: `{"level":"info","ts":"2024-11-25T13:34:24.808Z","logger":"ebpf-client","caller":"events/events.go:193","msg":"Flow Info: ","Src IP":"","Src Port":9096,"Dest IP":"","Dest Port":56830,"Proto":"TCP","Verdict":"DENY"}` 

 **Solution**: This issue has been resolved in the new version of the Network Policy Controller. Update to the latest EKS platform version to resolve logging issues.

### Pod-to-pod communication issues after migrating from Calico
<a name="network-policies-troubleshooting-calico-migration"></a>

 **Issue**: After upgrading an EKS cluster to version 1.30 and switching from Calico to Amazon VPC CNI for network policy, pod-to-pod communication fails when network policies are applied. Communication is restored when network policies are deleted.

 **Solution**: The network policy agent in the VPC CNI can’t have as many ports specified as Calico does. Instead, use port ranges in the network policies. The maximum number of unique combinations of ports for each protocol in each `ingress:` or `egress:` selector in a network policy is 24. Use port ranges to reduce the number of unique ports and avoid this limitation.
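For instance, instead of two dozen individual port entries, one rule can use `endPort` to cover a contiguous range. The namespace, labels, and port numbers here are hypothetical; `endPort` is part of the stable `networking.k8s.io/v1` NetworkPolicy API:

```
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: prod            # hypothetical namespace and labels
  name: backend-port-range
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8000
          endPort: 8080      # one range entry instead of 81 individual ports
```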

### Network policy agent doesn’t support standalone pods
<a name="network-policies-troubleshooting-standalone-pods"></a>

 **Issue**: Network policies applied to standalone pods may have inconsistent behavior.

 **Solution**: The Network Policy agent currently only supports pods that are deployed as part of a Deployment or ReplicaSet. If network policies are applied to standalone pods, there might be some inconsistencies in the behavior. This is documented at the top of this page, in the [Considerations](cni-network-policy.md#cni-network-policy-considerations), and in [aws-network-policy-agent issue #327](https://github.com/aws/aws-network-policy-agent/issues/327) on GitHub. Deploy pods as part of a Deployment or ReplicaSet for consistent network policy behavior.

# Stars demo of network policy for Amazon EKS
<a name="network-policy-stars-demo"></a>

This demo creates a front-end, back-end, and client service on your Amazon EKS cluster. The demo also creates a management graphical user interface that shows the available ingress and egress paths between each service. We recommend that you complete the demo on a cluster that you don’t run production workloads on.

Before you create any network policies, all services can communicate bidirectionally. After you apply the network policies, you can see that the client can only communicate with the front-end service, and the back-end only accepts traffic from the front-end.

1. Apply the front-end, back-end, client, and management user interface services:

   ```
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/namespace.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/management-ui.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/backend.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/frontend.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/client.yaml
   ```

1. View all Pods on the cluster.

   ```
   kubectl get pods -A
   ```

   An example output is as follows.

   In your output, you should see pods in the namespaces shown in the following output. The *NAMES* of your pods and the number of pods in the `READY` column will differ from those in the following output. Don’t continue until you see pods with similar names, all with `Running` in the `STATUS` column.

   ```
   NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
   [...]
   client            client-xlffc                               1/1     Running   0          5m19s
   [...]
   management-ui     management-ui-qrb2g                        1/1     Running   0          5m24s
   stars             backend-sz87q                              1/1     Running   0          5m23s
   stars             frontend-cscnf                             1/1     Running   0          5m21s
   [...]
   ```

1. To connect to the management user interface, get the `EXTERNAL-IP` of the service running on your cluster:

   ```
   kubectl get service/management-ui -n management-ui
   ```

1. Open a browser to the location from the previous step. You should see the management user interface. The **C** node is the client service, the **F** node is the front-end service, and the **B** node is the back-end service. Each node has full communication access to all other nodes, as indicated by the bold, colored lines.  
![\[Open network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-default.png)

1. Apply the following network policy in both the `stars` and `client` namespaces to isolate the services from each other:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     name: default-deny
   spec:
     podSelector:
       matchLabels: {}
   ```

   You can use the following commands to apply the policy to both namespaces:

   ```
   kubectl apply -n stars -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
   kubectl apply -n client -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
   ```

1. Refresh your browser. You see that the management user interface can no longer reach any of the nodes, so they don’t show up in the user interface.

1. Apply the following network policies to allow the management user interface to access the services. Apply this policy to allow the UI to reach the services in the `stars` namespace:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: allow-ui
   spec:
     podSelector:
       matchLabels: {}
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: management-ui
   ```

   Apply this policy to allow the client:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: client
     name: allow-ui
   spec:
     podSelector:
       matchLabels: {}
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: management-ui
   ```

   You can use the following commands to apply both policies:

   ```
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/allow-ui.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/allow-ui-client.yaml
   ```

1. Refresh your browser. You see that the management user interface can reach the nodes again, but the nodes cannot communicate with each other.  
![\[UI access network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-no-traffic.png)

1. Apply the following network policy to allow traffic from the front-end service to the back-end service:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: backend-policy
   spec:
     podSelector:
       matchLabels:
         role: backend
     ingress:
       - from:
           - podSelector:
               matchLabels:
                 role: frontend
         ports:
           - protocol: TCP
             port: 6379
   ```

1. Refresh your browser. You see that the front-end can communicate with the back-end.  
![\[Front-end to back-end policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-front-end-back-end.png)

1. Apply the following network policy to allow traffic from the client to the front-end service:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: frontend-policy
   spec:
     podSelector:
       matchLabels:
         role: frontend
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: client
         ports:
           - protocol: TCP
             port: 80
   ```

1. Refresh your browser. You see that the client can communicate to the front-end service. The front-end service can still communicate to the back-end service.  
![\[Final network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-final.png)

1. (Optional) When you are done with the demo, you can delete its resources.

   ```
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/client.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/frontend.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/backend.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/management-ui.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/namespace.yaml
   ```

   Even after deleting the resources, there can still be network policy endpoints on the nodes that might interfere in unexpected ways with networking in your cluster. The only sure way to remove these rules is to reboot the nodes, or to terminate all of the nodes and recycle them. To terminate all nodes, either set the Auto Scaling group desired count to 0 and then back up to the desired number, or terminate the instances directly.
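As a sketch, recycling the nodes through the Auto Scaling group can look like the following. The group name and desired count are placeholders, and the commands are printed here rather than executed:

```
# Placeholder values; substitute your node group's Auto Scaling group.
ASG_NAME="my-node-group-asg"
DESIRED_COUNT=3

# Scale to zero to terminate every node, then back up to recreate them.
echo "aws autoscaling set-desired-capacity --auto-scaling-group-name ${ASG_NAME} --desired-capacity 0"
echo "aws autoscaling set-desired-capacity --auto-scaling-group-name ${ASG_NAME} --desired-capacity ${DESIRED_COUNT}"
```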