Kubernetes Prerequisites and Permissions
You must ensure that certain prerequisites and permissions are configured before implementing the Aviatrix Kubernetes solution.
The Aviatrix Kubernetes solution is currently only available for AWS and Azure.
General Prerequisites
- The Kubernetes API server must be accessible via a public IP address from the Internet. You can restrict access so that only the IP address of the Aviatrix Controller can connect to the API server. For more information, see Configure endpoint access - AWS console in the AWS documentation.
- Aviatrix recommends that your Kubernetes clusters use a flat network model (this is the EKS and AKS default). If your clusters are in an overlay network, only SmartGroups from Kubernetes Services can be used, and they must only be used as the Destination Group of Distributed Cloud Firewall (DCF) rules. In this case, DCF only controls connections to the Load Balancers created for Kubernetes Load Balancer services. For more information, see Services, Load Balancing, and Networking in the Kubernetes documentation.
- SNAT must be disabled for the worker nodes (VMs) in your Kubernetes clusters, regardless of the cloud provider you are using. For more information, see Enable outbound internet access for Pods.
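As an illustrative sketch of restricting API server access to the Controller on EKS, the AWS CLI can limit the public endpoint to a single CIDR. The cluster name, region, and IP address below are placeholders, not values from this document:

```shell
# Keep the EKS public API endpoint enabled, but allow connections only
# from the Aviatrix Controller's public IP (placeholder 203.0.113.10).
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.10/32"
```

The change takes a few minutes to apply; verify the resulting CIDR list with `aws eks describe-cluster`.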
Kubeconfig File
- The kubeconfig file must have the necessary permissions before you onboard a cluster, whether manually or via an authentication method.
- Ensure that the account represented by the kubeconfig file has the GET, LIST, and WATCH permissions for namespaces, services, pods, discovery.k8s.io/endpointslices, and nodes configured in its ClusterRole. A ClusterRole contains rules that represent a set of permissions. See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole for more information.
- The kubeconfig file cannot use the exec configuration. See https://kubernetes.io/docs/reference/config-api/kubeconfig.v1/#ExecConfig for more information.
- Ensure that the kubeconfig file does not allow creating, editing, or deleting resources.
Here is an example of a properly configured kubeconfig file:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJ...Cg==
    server: https://127.0.0.1:64774
  name: kind-cluster-1-51992
contexts:
- context:
    cluster: kind-cluster-1-51992
    user: kind-cluster-1-51992
  name: kind-cluster-1-51992
current-context: kind-cluster-1-51992
preferences: {}
users:
- name: kind-cluster-1-51992
  user:
    client-certificate-data: LS0tLS1...S0tCg==
    client-key-data: LS0tLS1C...S0tCg==
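For reference, a minimal ClusterRole granting the read-only permissions described above might look like the following sketch (the role name is a placeholder):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aviatrix-readonly   # placeholder name
rules:
# Core API group resources the account must be able to read.
- apiGroups: [""]
  resources: ["namespaces", "services", "pods", "nodes"]
  verbs: ["get", "list", "watch"]
# EndpointSlices live in the discovery.k8s.io API group.
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["get", "list", "watch"]
```

Binding this role to the account via a ClusterRoleBinding grants only read access, which also satisfies the requirement that the kubeconfig must not allow creating, editing, or deleting resources.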
AWS
- Enable the necessary IAM permissions.
- Ensure that your AWS cloud account has the following permissions for discovering EKS clusters (added to the App role in your AWS console):
  - eks:ListClusters
  - eks:DescribeCluster
  - elasticloadbalancing:DescribeLoadBalancers (should already be included in the base set of permissions)
  - elasticloadbalancing:DescribeTags (should already be included in the base set of permissions)
- To grant the necessary permissions to the AWS role for EKS clusters, follow the instructions in the AWS EKS User Guide.
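As a sketch, the EKS discovery permissions listed above could be granted through an IAM policy statement like the following (the Sid is illustrative, and you may scope Resource more narrowly to suit your environment):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AviatrixEksDiscovery",
      "Effect": "Allow",
      "Action": [
        "eks:ListClusters",
        "eks:DescribeCluster",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
```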
Terraform
To onboard Kubernetes clusters using a Terraform script, you must have Terraform installed on your local machine with both AWS and Kubernetes set up as providers.
For more information, see Terraform Provider Overview and Terraform Provider Requirements.
To set up Terraform with AWS and Kubernetes providers:
- Install and configure the AWS CLI (obtain your access key ID and AWS secret access key, then enter them when prompted by 'aws configure'). The AWS CLI can then be used with eksctl and Terraform.
- Install Terraform (if not already installed).
- Install kubectl.
- Create the following files in your Terraform project directory: main.tf, variables.tf, outputs.tf, and providers.tf.
- Go to the Terraform Registry and find the AWS and Kubernetes providers.
- Click each provider, then click USE PROVIDER to copy the installation code.
- Paste the installation code for the AWS and Kubernetes providers into providers.tf. This tells Terraform which providers to use from the Registry.
- Specify the provider configuration for AWS and Kubernetes (such as the region for AWS).
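The provider setup described above might yield a providers.tf similar to the following sketch (provider versions, region, and kubeconfig path are illustrative):

```hcl
# providers.tf -- illustrative; pin versions and set the region for your project
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # example region
}

provider "kubernetes" {
  config_path = "~/.kube/config" # example: authenticate via a local kubeconfig
}
```

Run `terraform init` after saving this file so Terraform downloads both providers from the Registry.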
See Onboarding via Authentication Method for steps.
eksctl and kubectl
You need these tools for both the Terraform and Command Line authentication methods.
- eksctl: eksctl Installation
- kubectl: Kubernetes kubectl Installation
See Onboarding via Authentication Method for steps.
Azure
Azure requires the following permissions to connect to and discover AKS clusters. See Apply Azure Role-Based Access Control (RBAC) to an Aviatrix Azure Account and Aviatrix Required Custom Role Permissions for examples and for where to add these permissions:
- Microsoft.ContainerService/managedClusters/read
- Microsoft.ContainerService/managedClusters/listClusterUserCredential/action
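For illustration, these two actions could appear in an Azure custom role definition like the following sketch (the role name, description, and subscription ID are placeholders):

```json
{
  "Name": "Aviatrix AKS Discovery",
  "IsCustom": true,
  "Description": "Read AKS clusters and list user credentials (illustrative role)",
  "Actions": [
    "Microsoft.ContainerService/managedClusters/read",
    "Microsoft.ContainerService/managedClusters/listClusterUserCredential/action"
  ],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```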
Network load balancers can be used, but only if they are in private subnets. You are responsible for authorizing network load balancers and application load balancers in public subnets.
AKS clusters must use Kubernetes RBAC instead of any integration with Entra. See https://learn.microsoft.com/en-us/azure/aks/concepts-identity#kubernetes-rbac for more information.
Limitations
The Aviatrix Kubernetes solution does not currently support private clusters.
Clusters that do not meet the above prerequisites appear greyed out when you attempt to select them as a Resource Type while creating a SmartGroup.