Private Kubernetes clusters are clusters where the API server is not exposed to
the public internet and only has a private IP address. Because the Aviatrix
Controller cannot reach a private API server over the internet, the Controller
must have private network connectivity to the cluster’s API server before you
can onboard it.
How you establish that connectivity is up to you — VPC peering, a VPN, AWS
PrivateLink, or an Aviatrix Transit architecture all work. The only requirement
is that the Aviatrix Controller can reach the Kubernetes API server endpoint
over a private path.
This guide covers:
- Onboarding a private cluster — the core workflow, assuming private
connectivity already exists.
- Establishing private connectivity with Aviatrix — one example approach
using Aviatrix Transit and Spoke Gateways.
Part 1: Onboarding a Private Cluster
This section assumes the Aviatrix Controller already has network reachability to
the Kubernetes API server over a private path. If it does not, see
Part 2 for
one way to set that up.
Prerequisites
Before you begin, ensure you have the following:
- An Aviatrix Controller with private network connectivity to the Kubernetes API
server.
- An API server security group (or equivalent firewall rules) that allows
  inbound traffic from the Controller's private IP.
- kubectl access to the private Kubernetes cluster.
- Terraform installed if you plan to use Infrastructure as Code for
configuration.
- The AWS CLI (or equivalent CSP CLI) configured with permissions to describe
your cluster.
Step 1: Allow the Controller Through the Cluster Security Group
By default, managed Kubernetes services (such as EKS) restrict API server access
to worker nodes only. Even if private connectivity exists at the network layer,
you must add an inbound rule that allows the Aviatrix Controller to reach the
API server.
For example, when using the terraform-aws-modules/eks/aws module:
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # ...

  cluster_security_group_additional_rules = {
    avx_controller = {
      cidr_blocks = ["<CONTROLLER_PRIVATE_IP>/32"]
      description = "Allow traffic from Aviatrix Controller"
      from_port   = 0
      to_port     = 0
      protocol    = "all"
      type        = "ingress"
    }
  }
}
```
Replace <CONTROLLER_PRIVATE_IP> with the private IP address of your Aviatrix
Controller instance.
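If your cluster is not managed through that module, an equivalent standalone rule can be sketched with the AWS provider's aws_security_group_rule resource. The security group ID placeholder is an assumption (you can find the cluster security group with aws eks describe-cluster), and the sketch scopes the rule to port 443, the Kubernetes API server port:

```hcl
# Sketch: allow the Aviatrix Controller to reach the API server on 443.
# <CLUSTER_API_SG_ID> is a placeholder for the cluster's API server
# security group ID.
resource "aws_security_group_rule" "avx_controller" {
  type              = "ingress"
  security_group_id = "<CLUSTER_API_SG_ID>"
  cidr_blocks       = ["<CONTROLLER_PRIVATE_IP>/32"]
  protocol          = "tcp"
  from_port         = 443
  to_port           = 443
  description       = "Allow traffic from Aviatrix Controller"
}
```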
Step 2: Create a Service Account
Create a ServiceAccount in the private Kubernetes cluster that the Aviatrix
Controller will use for workload discovery and policy enforcement.
Apply the following manifest:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: avx-controller
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: avx-controller
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: avx-controller
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: avx-controller
rules:
  - verbs: ["get", "list", "watch"]
    apiGroups: [""]
    resources: ["pods", "services", "namespaces", "nodes"]
  - verbs: ["get", "list", "watch"]
    apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
  - verbs: ["*"]
    apiGroups: ["networking.aviatrix.com"]
    resources: ["*"]
  - verbs: ["create", "patch"]
    apiGroups: ["events.k8s.io"]
    resources: ["events"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: avx-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: avx-controller
subjects:
  - kind: ServiceAccount
    name: avx-controller
    namespace: kube-system
```
After applying the manifest, extract the authentication token:
```bash
TOKEN=$(kubectl get secret -n kube-system avx-controller -o jsonpath='{.data.token}' | base64 -d)
```
Step 3: Create a Kubeconfig
Build a kubeconfig file for the private cluster. Replace the placeholder values
with your cluster-specific details.
```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <CA_DATA>
    server: <ENDPOINT>
  name: private-cluster
contexts:
- context:
    cluster: private-cluster
    user: private-cluster
  name: private-cluster
current-context: private-cluster
preferences: {}
users:
- name: private-cluster
  user:
    token: <TOKEN>
```
Retrieve the placeholder values as follows:
| Placeholder | How to Obtain |
|---|---|
| `<TOKEN>` | Extracted in Step 2 above. |
| `<CA_DATA>` | `aws eks describe-cluster --name <CLUSTER_NAME> --query cluster.certificateAuthority.data --output text` |
| `<ENDPOINT>` | `aws eks describe-cluster --name <CLUSTER_NAME> --query cluster.endpoint --output text` |
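The kubeconfig can also be assembled with a small script. The sketch below uses hypothetical stand-in values so it runs on its own; in practice, substitute the real token, CA data, and endpoint obtained above:

```shell
#!/bin/sh
# Stand-in values for illustration only; replace with the real values
# from Step 2 and the describe-cluster commands above.
TOKEN="example-token"
CA_DATA="ZmFrZS1jYQ=="
ENDPOINT="https://example.eks.amazonaws.com"

# Render the kubeconfig template with the values filled in.
cat > private_kubeconfig.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${CA_DATA}
    server: ${ENDPOINT}
  name: private-cluster
contexts:
- context:
    cluster: private-cluster
    user: private-cluster
  name: private-cluster
current-context: private-cluster
preferences: {}
users:
- name: private-cluster
  user:
    token: ${TOKEN}
EOF

echo "wrote private_kubeconfig.yaml"
```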
Step 4: Onboard the Private Cluster
Controller 8.2+: Private clusters can be onboarded using CSP credentials, just
like public clusters. The networking and security group prerequisites still
apply, but you can skip the ServiceAccount and kubeconfig steps above.

```hcl
# Look up the cluster so its ARN can be used as the cluster ID.
# <CLUSTER_NAME> is a placeholder for your EKS cluster name.
data "aws_eks_cluster" "eks_cluster" {
  name = "<CLUSTER_NAME>"
}

resource "aviatrix_kubernetes_cluster" "eks_cluster" {
  cluster_id          = data.aws_eks_cluster.eks_cluster.arn
  use_csp_credentials = true
}
```
For Controller versions before 8.2, onboard the cluster manually using the
kubeconfig you created. The Controller will not automatically discover private
clusters, so you must register it as a new cluster — even if a discovered entry
already appears.
```hcl
data "local_file" "kubeconfig" {
  filename = "./private_kubeconfig.yaml"
}

data "aviatrix_account" "aws_account" {
  account_name = var.aviatrix_aws_access_account
}

resource "aviatrix_kubernetes_cluster" "my_private_cluster" {
  # Use a custom ID; it must differ from the ARN of the discovered cluster.
  cluster_id  = "my-private-cluster-id"
  kube_config = data.local_file.kubeconfig.content

  cluster_details {
    account_name           = data.aviatrix_account.aws_account.account_name
    account_id             = data.aviatrix_account.aws_account.aws_account_number
    name                   = "my_private_cluster"
    region                 = "<REGION>"
    vpc_id                 = "<CLUSTER_VPC_ID>"
    is_publicly_accessible = true
    platform               = "EKS"
    version                = "1.30"
    network_mode           = "FLAT"
  }
}
```
After onboarding, both the discovered cluster and the manually configured
cluster may appear in CoPilot. The manually configured cluster is the active
one. The cluster may also display as PUBLIC in CoPilot even though it is a
private cluster — this is expected behavior.
You can verify the connection status from the Controller:
```bash
kubectl exec -ti deploy/cloudxd -- asset-cli status k8s
```
The manually onboarded cluster should show a status of RUNNING within
approximately 30 seconds.
With the private cluster onboarded, you can create SmartGroups that reference
workloads in the cluster. Use the custom cluster ID (not the discovered cluster
ARN) when defining SmartGroups:
```hcl
resource "aviatrix_smart_group" "k8s_nodes" {
  name = "k8s-nodes"
  selector {
    match_expressions {
      type           = "k8s_node"
      k8s_cluster_id = "my-private-cluster-id"
    }
  }
}

resource "aviatrix_smart_group" "k8s_pods" {
  name = "k8s-pods"
  selector {
    match_expressions {
      type           = "k8s"
      k8s_cluster_id = "my-private-cluster-id"
      k8s_namespace  = "my-namespace"
    }
  }
}
```
You can then use these SmartGroups in DCF policy rules to control traffic to and
from workloads running in the private cluster.
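As an illustration, a DCF rule permitting traffic from the node SmartGroup to the pod SmartGroup could be sketched as follows. The rule name and priority are illustrative, and note that a policy list resource manages the full rule set, so in practice this rule would live alongside your other DCF rules:

```hcl
# Sketch: permit node-to-pod traffic for the private cluster.
resource "aviatrix_distributed_firewalling_policy_list" "private_cluster" {
  policies {
    name     = "nodes-to-pods"
    priority = 10
    action   = "PERMIT"
    protocol = "ANY"

    src_smart_groups = [aviatrix_smart_group.k8s_nodes.uuid]
    dst_smart_groups = [aviatrix_smart_group.k8s_pods.uuid]
  }
}
```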
Part 2: Establishing Private Connectivity with Aviatrix Transit
If you do not already have private connectivity between the Aviatrix Controller
and the Kubernetes cluster, one approach is to use Aviatrix Transit and Spoke
Gateways. This section walks through that setup.
This is just one option. Any method that gives the Controller private
reachability to the Kubernetes API server will work — VPC peering, AWS Transit
Gateway, VPN, PrivateLink, and so on.
Create a Spoke Gateway in the Controller VPC
Deploy an Aviatrix Spoke Gateway in the VPC where the Aviatrix Controller runs.
If a Spoke Gateway already exists in this VPC, skip this step.
The Controller itself runs in a public subnet, but the Spoke Gateway enables
private-address connectivity through the Aviatrix Transit backbone to other
spokes — including the VPC that hosts the private Kubernetes cluster.
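A Spoke Gateway deployment can be sketched in Terraform as below. All placeholder values (account, VPC, region, subnet) are assumptions to fill in for your environment, and the gateway size is illustrative:

```hcl
# Sketch: Spoke Gateway in the VPC that hosts the Aviatrix Controller.
resource "aviatrix_spoke_gateway" "controller_vpc" {
  cloud_type   = 1 # AWS
  account_name = "<AVIATRIX_ACCESS_ACCOUNT>"
  gw_name      = "controller-vpc-spoke"
  vpc_id       = "<CONTROLLER_VPC_ID>"
  vpc_reg      = "<REGION>"
  gw_size      = "t3.small"
  subnet       = "<SUBNET_CIDR>"
}
```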
Attach Both VPCs to an Aviatrix Transit
Attach the controller VPC spoke and the cluster VPC spoke to the same Aviatrix
Transit so they can communicate over the Aviatrix backbone.
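The two attachments can be sketched with the aviatrix_spoke_transit_attachment resource; the gateway names shown are placeholders for your own spoke and transit gateways:

```hcl
# Sketch: attach both spokes to the same Transit Gateway.
resource "aviatrix_spoke_transit_attachment" "controller_vpc" {
  spoke_gw_name   = "controller-vpc-spoke"
  transit_gw_name = "<TRANSIT_GW_NAME>"
}

resource "aviatrix_spoke_transit_attachment" "cluster_vpc" {
  spoke_gw_name   = "<CLUSTER_VPC_SPOKE_NAME>"
  transit_gw_name = "<TRANSIT_GW_NAME>"
}
```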
Create a DCF Policy for Controller-to-Cluster Traffic
Create a DCF policy that permits the Controller to connect to the Kubernetes
control plane. The example below uses Terraform:
```hcl
resource "aviatrix_smart_group" "controller" {
  name = "controller"
  selector {
    match_expressions {
      type = "vm"
      name = "controller-instance"
    }
  }
}

resource "aviatrix_web_group" "private_cluster" {
  name = "private-cluster"
  selector {
    match_expressions {
      snifilter = "<CLUSTER_API_SERVER_HOSTNAME>"
    }
  }
}

resource "aviatrix_distributed_firewalling_policy_list" "controller_to_k8s" {
  policies {
    name     = "k8s-controller"
    priority = 3
    action   = "PERMIT"
    protocol = "ANY"

    src_smart_groups = [
      aviatrix_smart_group.controller.uuid,
    ]
    dst_smart_groups = ["def000ad-0000-0000-0000-000000000000"]
    web_groups = [
      aviatrix_web_group.private_cluster.uuid,
    ]
  }
}
```
You can obtain the API server hostname from the cluster endpoint URL. For EKS:

```bash
aws eks describe-cluster --name <CLUSTER_NAME> --query cluster.endpoint --output text
```

The hostname is the portion after https://.
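Stripping the scheme can also be scripted with shell parameter expansion. The endpoint below is a hypothetical example value; in practice, assign the output of the describe-cluster command:

```shell
# Hypothetical endpoint; replace with the real describe-cluster output.
ENDPOINT="https://A1B2C3D4E5.gr7.us-west-2.eks.amazonaws.com"

# Strip the leading scheme to get the hostname for the WebGroup SNI filter.
API_HOSTNAME="${ENDPOINT#https://}"
echo "$API_HOSTNAME"
```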
Once Transit connectivity is established and the DCF policy is in place, return
to Part 1 to complete the onboarding.