Kubernetes Prerequisites and Permissions

You must ensure that certain prerequisites and permissions are configured before implementing Aviatrix Kubernetes Firewall.

Aviatrix Kubernetes Firewall is currently only available for AWS and Azure.

General Prerequisites

  • The Kubernetes API server must be accessible via a public IP address from the Internet. You can restrict access so that only the IP address of the Aviatrix Controller can connect to the API server. For more information, see Configure endpoint access - AWS console in the AWS documentation.

  • Aviatrix recommends that your Kubernetes clusters use a flat network model (this is the EKS and AKS default).

    If your clusters are in an overlay network, only SmartGroups from Kubernetes Services can be used, and they must only be used as the Destination Group of Distributed Cloud Firewall (DCF) rules. In this case, DCF will only control connections to the Load Balancers created for Kubernetes Load Balancer services. For more information, see Services, Load Balancing, and Networking in the Kubernetes documentation.
  • SNAT must be disabled for the worker nodes (VMs) in your Kubernetes clusters regardless of the cloud provider you are using. For more information, see Enable outbound internet access for Pods.
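
To illustrate the Load Balancer services mentioned above, a hypothetical Kubernetes Service of type LoadBalancer looks like the following (all names are placeholders). Creating such a Service provisions the cloud load balancer whose connections DCF can control:

apiVersion: v1
kind: Service
metadata:
  name: example-web        # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: example-web       # placeholder Pod label
  ports:
    - port: 80
      targetPort: 8080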

Kubeconfig File

Here is an example of a properly configured kubeconfig file:

apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJ...Cg==
      server: https://127.0.0.1:64774
    name: kind-cluster-1-51992
contexts:
  - context:
      cluster: kind-cluster-1-51992
      user: kind-cluster-1-51992
    name: kind-cluster-1-51992
current-context: kind-cluster-1-51992
preferences: {}
users:
  - name: kind-cluster-1-51992
    user:
      client-certificate-data: LS0tLS1...S0tCg==
      client-key-data: LS0tLS1C...S0tCg==

AWS

  • Grant the IAM permissions needed to discover Kubernetes clusters. For example:

     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": [
             "eks:ListClusters",
             "eks:DescribeCluster"
           ],
           "Resource": "*"
         }
       ]
     }
  • Ensure that your AWS cloud account has the following permissions for discovering EKS clusters (added to the App role in your AWS console):

    • eks:ListClusters

    • eks:DescribeCluster

    • elasticloadbalancing:DescribeLoadBalancers (should already be included in the base set of permissions)

    • elasticloadbalancing:DescribeTags (should already be included in the base set of permissions)

To grant the necessary permissions to the AWS role for EKS clusters, follow the instructions in the AWS EKS User Guide.

Terraform

To onboard Kubernetes clusters using a Terraform script, you must have Terraform installed on your local machine with both AWS and Kubernetes set up as providers.

To set up Terraform with AWS and Kubernetes providers:

  1. Install and configure the AWS CLI: create an access key ID and secret access key in the AWS console, then enter them when prompted by 'aws configure'. The AWS CLI can then be used with eksctl and Terraform.

  2. Install Terraform (if not already installed).

  3. Install kubectl.

  4. Create files in your Terraform project directory: main.tf, variables.tf, outputs.tf, providers.tf.

  5. Go to the Terraform Registry and find the AWS and Kubernetes providers.

  6. Click on each provider and then click USE PROVIDER to copy the installation code.

  7. Paste the installation code for AWS and Kubernetes into providers.tf. This tells Terraform what providers to use from the Registry.

  8. Specify the provider configuration for AWS and Kubernetes (such as region for AWS).

eksctl and kubectl

You need these tools for both the Terraform and Command Line authentication methods.

Azure

Azure requires the following permissions to connect to and discover AKS clusters. For examples, and for details on where to add these permissions, see Apply Azure Role-Based Access Control (RBAC) to an Aviatrix Azure Account and Aviatrix Required Custom Role Permissions:

  • Microsoft.ContainerService/managedClusters/read

  • Microsoft.ContainerService/managedClusters/listClusterUserCredential/action

Network Load Balancers can be used, but only if they are in private subnets. You are responsible for authorizing Network Load Balancers and Application Load Balancers in public subnets.

AKS clusters must use Kubernetes RBAC rather than Microsoft Entra ID integration. See https://learn.microsoft.com/en-us/azure/aks/concepts-identity#kubernetes-rbac for more information.
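
For illustration, the two permissions could appear in an Azure custom role definition as follows. The role name, description, and scope are placeholders; in practice, add the actions to your existing Aviatrix custom role as described in the documentation linked above:

{
  "Name": "Example Aviatrix AKS Discovery Role",
  "IsCustom": true,
  "Description": "Example role for discovering AKS clusters",
  "Actions": [
    "Microsoft.ContainerService/managedClusters/read",
    "Microsoft.ContainerService/managedClusters/listClusterUserCredential/action"
  ],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}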

Limitations

The Aviatrix Kubernetes solution does not currently work with private clusters. This limitation is expected to be removed in a future release.

Clusters that do not meet the above prerequisites are greyed out if you attempt to select them as a Resource Type when creating a SmartGroup, and the UI explains why the cluster cannot be used.
