Private Kubernetes clusters have an API server that is not exposed to the public internet. Because the Aviatrix Controller must reach the Kubernetes API server to discover resources, it needs a private network path to the cluster before you can onboard it. How you establish that connectivity is up to you — VPC peering, a VPN, AWS PrivateLink, or an Aviatrix Transit architecture all work. The only requirement is that the Controller can reach the Kubernetes API server endpoint over a private path. This guide covers:
  1. Establishing private connectivity — setting up a network path between the Controller and the private cluster.
  2. Onboarding the private cluster — registering it with the Controller once connectivity is in place.
All prerequisites from the main onboarding guide apply to private clusters. Review them before proceeding.

Prerequisites

In addition to the standard prerequisites:
  • The Aviatrix Controller must have private network reachability to the Kubernetes API server.
  • The cluster’s API server security group (or equivalent firewall rules) must allow inbound HTTPS (port 443) from the Controller’s private IP address.
  • You must have kubectl access to the private cluster to apply RBAC configurations.
  • Terraform installed if you plan to use Infrastructure as Code for configuration.

Establishing Private Connectivity

The Controller needs a network path to the private Kubernetes API server; choose the method that fits your environment. The walkthrough below uses Aviatrix Transit and Spoke Gateways to connect the Controller's VPC to the cluster's VPC over the Aviatrix backbone.
This is one option among many. Any method that gives the Controller private reachability to the Kubernetes API server works: VPC peering, AWS Transit Gateway, VPN, PrivateLink, and so on.

Step 1: Create a Spoke Gateway in the Controller VPC

Deploy an Aviatrix Spoke Gateway in the VPC where the Aviatrix Controller runs. If a Spoke Gateway already exists in this VPC, skip this step. The Controller runs in a public subnet, but the Spoke Gateway enables private-address connectivity through the Aviatrix Transit backbone to other spokes — including the VPC that hosts the private Kubernetes cluster.
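As a sketch, the Spoke Gateway in the Controller VPC might be defined in Terraform as follows; the gateway name, size, and variable names are assumptions for illustration, not prescribed values:

```hcl
resource "aviatrix_spoke_gateway" "controller_vpc" {
  cloud_type   = 1                               # 1 = AWS
  account_name = var.aviatrix_aws_access_account
  gw_name      = "controller-vpc-spoke"          # hypothetical gateway name
  vpc_id       = var.controller_vpc_id
  vpc_reg      = "us-east-2"
  gw_size      = "t3.small"
  subnet       = var.controller_vpc_subnet_cidr  # a subnet CIDR in the Controller VPC
}
```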

Step 2: Attach Both VPCs to an Aviatrix Transit

Attach the Controller VPC spoke and the cluster VPC spoke to the same Aviatrix Transit so they can communicate over the Aviatrix backbone.
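Attaching both spokes to the same transit can be sketched in Terraform as follows; the gateway names are placeholders and assume Spoke Gateways already exist in both VPCs:

```hcl
resource "aviatrix_spoke_transit_attachment" "controller_vpc" {
  spoke_gw_name   = "controller-vpc-spoke"   # hypothetical spoke in the Controller VPC
  transit_gw_name = var.transit_gateway_name
}

resource "aviatrix_spoke_transit_attachment" "cluster_vpc" {
  spoke_gw_name   = "cluster-vpc-spoke"      # hypothetical spoke in the cluster VPC
  transit_gw_name = var.transit_gateway_name
}
```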

Step 3: Create a DCF Policy for Controller-to-Cluster Traffic

Create a DCF policy that permits the Controller to connect to the Kubernetes control plane:
resource "aviatrix_smart_group" "controller" {
  name = "controller"
  selector {
    match_expressions {
      type = "vm"
      name = "controller-instance"
    }
  }
}

resource "aviatrix_web_group" "private_cluster" {
  name = "private-cluster"
  selector {
    match_expressions {
      snifilter = "<CLUSTER_API_SERVER_HOSTNAME>"
    }
  }
}

resource "aviatrix_distributed_firewalling_policy_list" "controller_to_k8s" {
  policies {
    name     = "k8s-controller"
    priority = 3
    action   = "PERMIT"
    protocol = "ANY"
    src_smart_groups = [
      aviatrix_smart_group.controller.uuid,
    ]
    dst_smart_groups = ["def000ad-0000-0000-0000-000000000000"]
    web_groups = [
      aviatrix_web_group.private_cluster.uuid,
    ]
  }
}
The dst_smart_groups entry references the built-in Anywhere SmartGroup by its well-known UUID (def000ad-0000-0000-0000-000000000000). You can obtain the API server hostname for <CLUSTER_API_SERVER_HOSTNAME> from the cluster endpoint URL. For EKS:
aws eks describe-cluster --name CLUSTER_NAME --query cluster.endpoint --output text
The hostname is the portion after https://.
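For example, you can strip the scheme with shell parameter expansion; the endpoint below is a made-up sample, not a real cluster:

```shell
# Hypothetical EKS endpoint; in practice, capture the describe-cluster output above.
ENDPOINT="https://A1B2C3D4E5.gr7.us-east-2.eks.amazonaws.com"
API_HOSTNAME="${ENDPOINT#https://}"   # remove the leading https://
echo "$API_HOSTNAME"                  # prints A1B2C3D4E5.gr7.us-east-2.eks.amazonaws.com
```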

Other Connectivity Options

Any method that provides private reachability from the Controller to the Kubernetes API server works:
  • VPC Peering — Direct peering between the Controller VPC and the cluster VPC.
  • AWS PrivateLink — Create a VPC endpoint for the EKS API server.
  • VPN — Site-to-site VPN connecting the Controller’s network to the cluster’s network.
  • AWS Transit Gateway — Native AWS Transit Gateway connecting both VPCs.

Configuring Security Groups

By default, managed Kubernetes services (such as EKS) restrict API server access to worker nodes only. Even with private connectivity at the network layer, you must add an inbound rule that allows the Controller to reach the API server. For example, when using the terraform-aws-modules/eks/aws module:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"
  # ...

  cluster_security_group_additional_rules = {
    avx_controller = {
      cidr_blocks = ["<CONTROLLER_PRIVATE_IP>/32"]
      description = "Allow traffic from Aviatrix Controller"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      type        = "ingress"
    }
  }
}
Replace <CONTROLLER_PRIVATE_IP> with the private IP address of your Aviatrix Controller instance.
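If the cluster security group is managed outside the EKS module, an equivalent standalone rule with the AWS provider might look like this sketch; the security group ID and Controller IP are placeholders:

```hcl
resource "aws_security_group_rule" "avx_controller_to_api" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["10.20.30.40/32"]       # Controller private IP (placeholder)
  security_group_id = "sg-0123456789abcdef0"   # cluster API server security group (placeholder)
  description       = "Allow traffic from Aviatrix Controller"
}
```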

Onboarding the Private Cluster

Once connectivity is established and security groups are configured, onboard the cluster. For Controller version 8.2 and later, private clusters can be onboarded using CSP credentials — the same method used for public clusters. The networking and security group prerequisites still apply, but you can skip the service account and kubeconfig steps.
resource "aviatrix_kubernetes_cluster" "private_eks" {
  cluster_id          = data.aws_eks_cluster.private_cluster.arn
  use_csp_credentials = true
}
You still need to grant Controller access (EKS Access Entry + RBAC) as described in the main onboarding guide.

Using a Kubeconfig (Pre-8.2 or Non-Managed Clusters)

For Controller versions before 8.2, or for clusters where CSP credentials are not available, create a service account and kubeconfig manually.

Step 1: Create a Service Account

Apply the following manifest to create a ServiceAccount with the required RBAC permissions:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: avx-controller
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: avx-controller
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: avx-controller
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: avx-controller
rules:
  - verbs: ["get", "list", "watch"]
    apiGroups: [""]
    resources: ["pods", "services", "namespaces", "nodes"]
  - verbs: ["get", "list", "watch"]
    apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
  # Aviatrix CRD permissions — required for DCF policy enforcement via CRDs.
  # These write permissions apply only to Aviatrix-specific resources, not core K8s resources.
  - verbs: ["*"]
    apiGroups: ["networking.aviatrix.com"]
    resources: ["*"]
  - verbs: ["create", "patch"]
    apiGroups: ["events.k8s.io"]
    resources: ["events"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: avx-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: avx-controller
subjects:
  - kind: ServiceAccount
    name: avx-controller
    namespace: kube-system
After applying the manifest, extract the authentication token:
TOKEN=$(kubectl get secret -n kube-system avx-controller -o jsonpath='{.data.token}' | base64 -d)

Step 2: Create a Kubeconfig

Build a kubeconfig file for the private cluster. Replace the placeholder values with your cluster-specific details.
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: <CA_DATA>
      server: <ENDPOINT>
    name: private-cluster
contexts:
  - context:
      cluster: private-cluster
      user: private-cluster
    name: private-cluster
current-context: private-cluster
preferences: {}
users:
  - name: private-cluster
    user:
      token: <TOKEN>
Retrieve the placeholder values:
Placeholder    How to Obtain
<TOKEN>        Extracted in Step 1 above
<CA_DATA>      aws eks describe-cluster --name CLUSTER_NAME --query cluster.certificateAuthority.data --output text
<ENDPOINT>     aws eks describe-cluster --name CLUSTER_NAME --query cluster.endpoint --output text
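The kubeconfig above can also be assembled with a short script. This is a sketch: the default TOKEN, CA_DATA, and ENDPOINT values are placeholders you would replace with the real values from the table.

```shell
# Placeholder defaults: set these from the table above before using the file.
TOKEN="${TOKEN:-sample-token}"
CA_DATA="${CA_DATA:-LS0tLS1CRUdJTi0tLS0t}"
ENDPOINT="${ENDPOINT:-https://A1B2C3D4E5.gr7.us-east-2.eks.amazonaws.com}"

cat > private_kubeconfig.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: ${CA_DATA}
      server: ${ENDPOINT}
    name: private-cluster
contexts:
  - context:
      cluster: private-cluster
      user: private-cluster
    name: private-cluster
current-context: private-cluster
preferences: {}
users:
  - name: private-cluster
    user:
      token: ${TOKEN}
EOF
echo "wrote private_kubeconfig.yaml"
```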

Step 3: Register the Cluster

data "local_file" "kubeconfig" {
  filename = "./private_kubeconfig.yaml"
}

data "aviatrix_account" "aws_account" {
  account_name = var.aviatrix_aws_access_account
}

resource "aviatrix_kubernetes_cluster" "private_cluster" {
  cluster_id  = "my-private-cluster-id"
  kube_config = data.local_file.kubeconfig.content

  cluster_details {
    account_name           = data.aviatrix_account.aws_account.account_name
    account_id             = data.aviatrix_account.aws_account.aws_account_number
    name                   = "my-private-cluster"
    region                 = "us-east-2"
    vpc_id                 = "vpc-abc123"
    is_publicly_accessible = false
    platform               = "EKS"
    version                = "1.30"
    network_mode           = "FLAT"
  }
}
This example uses a custom cluster_id that differs from the cluster's ARN. In that case, if the Controller also discovers the cluster via cloud APIs, both the discovered entry and the manually configured entry appear in CoPilot; the manually configured cluster is the active one.
The cluster may display as PUBLIC in CoPilot even though it is a private cluster. This is expected behavior.

Verification

After onboarding, verify the connection status:
kubectl exec -ti deploy/cloudxd -- asset-cli status k8s
The cluster should show a status of RUNNING within approximately 30 seconds. You can also check the status on the Cloud Resources > Cloud Assets > Kubernetes Clusters tab in CoPilot. See Verifying Onboarding Status for the status definitions.

Next Steps