- Establishing private connectivity — setting up a network path between the Controller and the private cluster.
- Onboarding the private cluster — registering it with the Controller once connectivity is in place.
All prerequisites from the main onboarding guide apply to private clusters. Review them before proceeding.
Prerequisites
In addition to the standard prerequisites:
- The Aviatrix Controller must have private network reachability to the Kubernetes API server.
- The cluster’s API server security group (or equivalent firewall rules) must allow inbound HTTPS (port 443) from the Controller’s private IP address.
- You must have kubectl access to the private cluster to apply RBAC configurations.
- Terraform installed if you plan to use Infrastructure as Code for configuration.
Establishing Private Connectivity
The Controller needs a network path to the private Kubernetes API server. Choose the method that fits your environment.

Option A: Aviatrix Transit (Recommended)
This approach uses Aviatrix Transit and Spoke Gateways to connect the Controller’s VPC to the cluster’s VPC over the Aviatrix backbone.

Step 1: Create a Spoke Gateway in the Controller VPC
Deploy an Aviatrix Spoke Gateway in the VPC where the Aviatrix Controller runs. If a Spoke Gateway already exists in this VPC, skip this step. The Controller runs in a public subnet, but the Spoke Gateway enables private-address connectivity through the Aviatrix Transit backbone to other spokes — including the VPC that hosts the private Kubernetes cluster.

Step 2: Attach Both VPCs to an Aviatrix Transit
Attach the Controller VPC spoke and the cluster VPC spoke to the same Aviatrix Transit so they can communicate over the Aviatrix backbone.

Step 3: Create a DCF Policy for Controller-to-Cluster Traffic
Create a DCF policy that permits the Controller to connect to the Kubernetes control plane.

Other Connectivity Options
Any method that provides private reachability from the Controller to the Kubernetes API server works:

- VPC Peering — Direct peering between the Controller VPC and the cluster VPC.
- AWS PrivateLink — Create a VPC endpoint for the EKS API server.
- VPN — Site-to-site VPN connecting the Controller’s network to the cluster’s network.
- AWS Transit Gateway — Native AWS Transit Gateway connecting both VPCs.
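If you use Option A, the DCF policy from Step 3 can be sketched with the Aviatrix Terraform provider's aviatrix_distributed_firewalling_policy_list resource. The SmartGroup references below are assumptions; they must point at SmartGroups you have defined to match the Controller instance and the cluster's API endpoint.

```hcl
# Sketch only: assumes SmartGroups named "controller" and "k8s_api"
# already exist and match the Controller instance and the cluster
# API endpoint, respectively.
resource "aviatrix_distributed_firewalling_policy_list" "controller_to_k8s" {
  policies {
    name             = "allow-controller-to-k8s-api"
    action           = "PERMIT"
    priority         = 100
    protocol         = "TCP"
    logging          = true
    src_smart_groups = [aviatrix_smart_group.controller.uuid]
    dst_smart_groups = [aviatrix_smart_group.k8s_api.uuid]

    # HTTPS only: the Controller talks to the API server on port 443.
    port_ranges {
      lo = 443
    }
  }
}
```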
Configuring Security Groups
By default, managed Kubernetes services (such as EKS) restrict API server access to worker nodes only. Even with private connectivity at the network layer, you must add an inbound rule that allows the Controller to reach the API server. For example, when using the terraform-aws-modules/eks/aws module:
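A sketch of such a rule, assuming a recent version of the module that supports the cluster_security_group_additional_rules input:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... your existing cluster configuration ...

  # Allow the Aviatrix Controller to reach the private API server.
  cluster_security_group_additional_rules = {
    aviatrix_controller_https = {
      description = "Aviatrix Controller to Kubernetes API server"
      type        = "ingress"
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      cidr_blocks = ["<CONTROLLER_PRIVATE_IP>/32"]
    }
  }
}
```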
Replace <CONTROLLER_PRIVATE_IP> with the private IP address of your Aviatrix Controller instance.
Onboarding the Private Cluster
Once connectivity is established and security groups are configured, onboard the cluster.

Using CSP Credentials (Recommended)
For Controller version 8.2 and later, private clusters can be onboarded using CSP credentials — the same method used for public clusters. The networking and security group prerequisites still apply, but you can skip the service account and kubeconfig steps.

Using a Kubeconfig (Pre-8.2 or Non-Managed Clusters)
For Controller versions before 8.2, or for clusters where CSP credentials are not available, create a service account and kubeconfig manually.

Step 1: Create a Service Account
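For Step 1, a minimal ServiceAccount manifest might look like the following sketch. The names, namespace, and read-only RBAC rules here are assumptions; use the exact permissions from the main onboarding guide.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aviatrix-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aviatrix-controller-read
rules:
- apiGroups: [""]
  # Assumed read-only scope; replace with the rules from the main guide.
  resources: ["pods", "services", "namespaces", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aviatrix-controller-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aviatrix-controller-read
subjects:
- kind: ServiceAccount
  name: aviatrix-controller
  namespace: kube-system
---
# On Kubernetes 1.24+, token Secrets are no longer auto-created,
# so one must be declared explicitly for the ServiceAccount.
apiVersion: v1
kind: Secret
metadata:
  name: aviatrix-controller-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: aviatrix-controller
type: kubernetes.io/service-account-token
```

The token can then be read with `kubectl -n kube-system get secret aviatrix-controller-token -o jsonpath='{.data.token}' | base64 -d`.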
Create a ServiceAccount with the RBAC permissions the Controller requires, and extract the ServiceAccount token for use in the kubeconfig.

Step 2: Create a Kubeconfig
Build a kubeconfig file for the private cluster. Replace the placeholder values with your cluster-specific details.

| Placeholder | How to Obtain |
|---|---|
| `<TOKEN>` | Extracted in Step 1 above |
| `<CA_DATA>` | `aws eks describe-cluster --name CLUSTER_NAME --query cluster.certificateAuthority.data --output text` |
| `<ENDPOINT>` | `aws eks describe-cluster --name CLUSTER_NAME --query cluster.endpoint --output text` |
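With those values in hand, the kubeconfig follows the standard token-based layout. The cluster, context, and user names below are arbitrary placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: private-cluster
  cluster:
    certificate-authority-data: <CA_DATA>
    server: <ENDPOINT>
users:
- name: aviatrix-service-account
  user:
    token: <TOKEN>
contexts:
- name: aviatrix
  context:
    cluster: private-cluster
    user: aviatrix-service-account
current-context: aviatrix
```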
Step 3: Register the Cluster
Use a custom cluster_id that differs from the cluster’s ARN. If the Controller discovers the cluster via cloud APIs, both the discovered entry and the manually configured entry will appear in CoPilot; the manually configured cluster is the active one.

The cluster may display as PUBLIC in CoPilot even though it is a private cluster. This is expected behavior.
Verification
After onboarding, verify the connection status. The cluster should report a status of RUNNING within approximately 30 seconds.
You can also check the status on the Cloud Resources > Cloud Assets > Kubernetes Clusters tab in CoPilot. See Verifying Onboarding Status for the status definitions.
Next Steps
- Create Kubernetes SmartGroups to reference workloads in the private cluster.
- Create DCF policy rules to control traffic to and from those workloads.
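As a starting point, a Kubernetes SmartGroup can be sketched with the Aviatrix Terraform provider's aviatrix_smart_group resource. The Kubernetes selector keys shown here are assumptions; confirm the exact field names in the provider documentation.

```hcl
# Sketch only: intended to match pods in the "payments" namespace of
# the onboarded private cluster. The k8s_* selector keys are
# assumptions; verify them against the Aviatrix provider docs.
resource "aviatrix_smart_group" "private_cluster_payments" {
  name = "private-cluster-payments"

  selector {
    match_expressions {
      type           = "k8s"
      k8s_cluster_id = "<CLUSTER_ID>" # the custom cluster_id used at registration
      k8s_namespace  = "payments"
    }
  }
}
```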