Overview
Onboarding a Kubernetes cluster creates a cluster configuration on the Aviatrix Controller that grants read-only access to discover Kubernetes namespaces, services, pods, endpoint slices, and nodes. This data powers SmartGroup-based security policies in the Distributed Cloud Firewall (DCF), letting you write firewall rules that reference Kubernetes workloads by label, namespace, or service name.

Two things must happen before DCF can enforce policy on a cluster:

- Grant Controller access to the cluster’s Kubernetes API — the Controller must be able to reach the API server and authenticate with sufficient RBAC permissions.
- Register the cluster — tell the Controller which cluster to watch by completing the onboarding workflow in the UI or via Terraform.
Onboarding Paths
| Path | When to Use | Supported Clouds |
|---|---|---|
| Auto-discovered + CSP credentials | Managed clusters (EKS/AKS) visible on the Kubernetes Clusters tab | AWS, Azure |
| Auto-discovered + kubeconfig | Managed clusters where the CSP credential path is not viable | AWS, Azure |
| Manual onboarding | Self-managed, private, or undiscovered clusters | AWS, Azure, GCP |
| Terraform (full automation) | CI/CD, IaC, multi-cluster at scale | All |
The Kubernetes Clusters tab is only visible when DCF is enabled and Kubernetes Resource Discovery is turned on under Groups > Settings. See Kubernetes Resource Discovery for setup instructions.
Prerequisites
Platform Requirements
Before onboarding any cluster, confirm the following:

- DCF is enabled. See Enable Distributed Cloud Firewall.
- Kubernetes Resource Discovery is enabled under Groups > Settings. See Kubernetes Resource Discovery.
- The Kubernetes API server must be network-reachable from the Controller.
- Public clusters: restrict API server access to the Controller’s IP address for security.
- Private clusters: a private network path is required. See Onboarding Private Kubernetes Clusters.
- SNAT must be disabled on worker nodes so that pod source IPs are preserved for policy enforcement. See the AWS documentation on external SNAT.
- Flat networking is recommended (this is the default for EKS and AKS). Overlay networks have limited SmartGroup support.
When a cluster uses an overlay network, only Service-type SmartGroups are supported. These SmartGroups can only be used as the destination in DCF rules, and they only control connections through the Load Balancer.
IAM Permissions
The Controller’s cloud account must include the following permissions for cluster discovery and node-level enforcement.

- AWS
- Azure
- GCP
Discovery
- `eks:ListClusters`
- `eks:DescribeCluster`
- `elasticloadbalancing:DescribeLoadBalancers`
- `elasticloadbalancing:DescribeTags`
- `ec2:DescribeSecurityGroups`
- `ec2:DescribeInstances`
Kubernetes RBAC
The AWS-managed `AmazonEKSViewPolicy` grants read access to most Kubernetes resources but does not include nodes. A supplemental ClusterRole is required so the Controller can discover node metadata for SmartGroup enforcement.
Create a file called cluster-role.yaml with the following content:
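A minimal sketch of such a file: a `view-nodes` ClusterRole granting read access to nodes, plus a matching ClusterRoleBinding. The bound group name below is a placeholder; it must match the identity that your EKS access entry (or equivalent) maps the Controller to.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-nodes
rules:
- apiGroups: [""]          # core API group
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: aviatrix-view-nodes   # placeholder: the group mapped to the Controller's identity
```

Apply it with `kubectl apply -f cluster-role.yaml`.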
Kubeconfig Requirements
When onboarding via kubeconfig file, the file must meet the following requirements:

- Allowed verbs: GET, LIST, and WATCH on namespaces, services, pods, nodes, and endpointslices.
- No exec-based authentication — the kubeconfig must not rely on `exec` plugins (such as `aws eks get-token`) because the Controller cannot execute external binaries.
- No create, update, or delete permissions — the Controller only needs read access.
- Exactly one cluster, user, and context — multi-context kubeconfig files are not supported.
- Inline credential data only — use `certificate-authority-data`, `client-certificate-data`, and `client-key-data` (base64-encoded) rather than file path references (`certificate-authority`, `client-certificate`, `client-key`, `tokenFile`). The Controller cannot read files from the local filesystem.
Example kubeconfig
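For illustration, a kubeconfig shape that satisfies the requirements above: one cluster, user, and context, inline CA data, and a static service-account token instead of an exec plugin. All values are placeholders.

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<api-server-endpoint>
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- name: aviatrix-readonly
  context:
    cluster: my-cluster
    user: aviatrix-readonly
current-context: aviatrix-readonly
users:
- name: aviatrix-readonly
  user:
    token: <service-account-token>   # static token; no exec plugin
```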
Understanding the Access Model
Each cloud provider has its own authentication and authorization layers between the Aviatrix Controller and the Kubernetes API server. Before configuring access, it helps to understand what each layer does and why it is required.

- AWS
- Azure
- GCP
EKS access requires two layers working together:

| Layer | Resource | What It Provides |
|---|---|---|
| AWS IAM | EKS Access Entry + AmazonEKSViewPolicy | Authenticates the Controller’s IAM role to the EKS cluster and grants read access to namespaces, services, pods, and endpoint slices |
| Kubernetes RBAC | view-nodes ClusterRole + ClusterRoleBinding | Grants read access to nodes (not covered by AmazonEKSViewPolicy) |

The Controller authenticates to EKS by requesting a short-lived STS token on each API call. There is no long-lived credential to rotate. Once both layers are in place, register the cluster so the Controller begins discovering Kubernetes resources.

What is the Controller’s IAM role ARN?

The Controller authenticates to AWS using the `aviatrix-role-app` IAM role from the AWS account you onboarded in CoPilot. The ARN follows this format: `arn:aws:iam::<aws-account-id>:role/aviatrix-role-app`. This is the same IAM role the Controller uses for all AWS API operations (launching gateways, managing route tables, and so on). You can find the role ARN in two places:

- CoPilot: navigate to Cloud Resources > Cloud Account and locate the AWS account. The role ARN is listed in the account details.
- AWS IAM Console: search for `aviatrix-role-app` under Roles.
Onboarding via Terraform
Terraform provides the most complete and repeatable onboarding path. A single `terraform apply` can grant Controller access and register the cluster in one step.
- AWS (EKS)
- Azure (AKS)
- GCP (GKE)
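For the EKS path, the access entry, node RBAC, and cluster registration can be sketched together. The access entry and RBAC resources below are standard `aws` and `kubernetes` provider resources; the `aviatrix_kubernetes_cluster` arguments beyond `cluster_id` are assumptions, and all values are placeholders.

```hcl
# IAM layer: access entry + AmazonEKSViewPolicy for the Controller's role.
resource "aws_eks_access_entry" "aviatrix" {
  cluster_name      = "my-cluster"
  principal_arn     = "arn:aws:iam::123456789012:role/aviatrix-role-app"
  kubernetes_groups = ["aviatrix-view-nodes"]
}

resource "aws_eks_access_policy_association" "aviatrix_view" {
  cluster_name  = "my-cluster"
  principal_arn = aws_eks_access_entry.aviatrix.principal_arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

  access_scope {
    type = "cluster"
  }
}

# RBAC layer: node read access not covered by AmazonEKSViewPolicy.
resource "kubernetes_cluster_role" "view_nodes" {
  metadata {
    name = "view-nodes"
  }
  rule {
    api_groups = [""]
    resources  = ["nodes"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "view_nodes" {
  metadata {
    name = "view-nodes"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.view_nodes.metadata[0].name
  }
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "aviatrix-view-nodes"   # matches kubernetes_groups on the access entry
  }
}

# Registration: cluster_id must be the full EKS ARN.
resource "aviatrix_kubernetes_cluster" "this" {
  cluster_id = "arn:aws:eks:us-east-2:123456789012:cluster/my-cluster"
  # Additional arguments may be required depending on provider version (sketch only).
}
```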
EKS requires an EKS Access Entry (IAM layer) and a Kubernetes ClusterRole (RBAC layer), followed by cluster registration; a single configuration can handle all three. It requires the `aws`, `kubernetes`, and `aviatrix` Terraform providers configured for the target cluster. The `cluster_id` must be the full EKS ARN — for example, `arn:aws:eks:us-east-2:123456789012:cluster/my-cluster`.

Custom or Self-Managed Clusters

For clusters built with kops, kubeadm, k3s, Rancher, or similar tools, provide a kubeconfig and cluster details:
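For example, registering a kubeadm cluster might look like the following sketch. The kubeconfig argument name is an assumption; the `cluster_details` block uses the arguments from the reference table, with placeholder values.

```hcl
resource "aviatrix_kubernetes_cluster" "kubeadm" {
  # Assumed argument name for the inline kubeconfig content (sketch).
  kube_config_file = file("${path.module}/kubeconfig.yaml")

  cluster_details {
    account_name           = "my-aws-account"
    account_id             = "123456789012"
    name                   = "my-kubeadm-cluster"
    region                 = "us-east-2"
    vpc_id                 = "vpc-0123456789abcdef0"
    is_publicly_accessible = false
    platform               = "kubeadm"
    version                = "1.29"
    network_mode           = "FLAT"
  }
}
```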
cluster_details argument reference
| Argument | Required | Description |
|---|---|---|
| account_name | Yes | Aviatrix cloud account name |
| account_id | Yes | Cloud account ID (AWS account number, Azure subscription ID) |
| name | Yes | Display name for the cluster |
| region | Yes | Cloud region |
| vpc_id | Yes | VPC/VNet ID. AWS: vpc-xxx. Azure: full resource ID. |
| is_publicly_accessible | Yes | Whether the K8s API server is publicly reachable |
| platform | Yes | Free-form string — for example, kops, kubeadm, k3s, rancher |
| version | Yes | Kubernetes version string |
| network_mode | Yes | FLAT or OVERLAY |
| project | No | GCP project ID |
| compartment | No | OCI compartment ID |
| tags | No | Key-value metadata map |
Multi-Cluster Automation
Use `for_each` to onboard multiple clusters in a single apply:
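A sketch of the pattern, with placeholder cluster names and ARNs:

```hcl
locals {
  clusters = {
    prod = "arn:aws:eks:us-east-2:123456789012:cluster/prod"
    dev  = "arn:aws:eks:us-east-2:123456789012:cluster/dev"
  }
}

# One registration per entry in the map; adding a cluster means adding a line.
resource "aviatrix_kubernetes_cluster" "all" {
  for_each   = local.clusters
  cluster_id = each.value
}
```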
Onboarding via CoPilot UI
For clusters discovered via cloud APIs, navigate to Cloud Resources > Cloud Assets > Kubernetes Clusters and click Onboard next to the cluster.

- AWS (EKS)
- Azure (AKS)
The Onboard Cluster dialog offers three options:
- Terraform — Displays generated HCL for the EKS access entry and RBAC resources. Copy the script, apply it with `terraform apply`, check the confirmation box, then click Onboard.
- Command Line — Displays generated `eksctl` YAML and `kubectl` YAML. Apply both configurations, check the confirmation box, then click Onboard.
- Upload Kubeconfig — Upload a kubeconfig file that meets the kubeconfig requirements. Click Onboard.
Options 1 and 2 handle both granting access and registration in one flow. Option 3 assumes the kubeconfig already has sufficient permissions.
Manual Onboarding
Use manual onboarding when the cluster was built with kops, kubeadm, k3s, Rancher, or similar tools, or when the cluster is not discoverable via cloud APIs.

The cluster must reside in a VPC/VNet in a supported cloud (AWS, Azure, or GCP). On-premises bare-metal clusters cannot be onboarded.
| Field | Description |
|---|---|
| Name | Display name for the cluster. Becomes part of the generated cluster ID. |
| Cloud | AWS, Azure, or GCP. Determines which accounts, regions, and VPCs are available. |
| Account | Cloud account where the cluster resides. Must already be onboarded in CoPilot. |
| Region | Cloud region. |
| VPC/VNet | VPC or VNet where the cluster nodes run. |
| Network Mode | Flat (recommended) or Overlay. Overlay shows a warning about limited DCF support. |
| Kubeconfig File | Upload a kubeconfig file that meets the kubeconfig requirements. |
Onboarding via CLI (AWS Only)
If you prefer CLI tools over Terraform, the following methods grant Controller access to an EKS cluster and prepare it for onboarding. After completing any of these methods, register the cluster through the CoPilot UI or Terraform.
eksctl + kubectl
Step 1 — Create the EKS access entry. Save the following as accessentry.yaml, replacing the placeholder values, then apply it.

Step 2 — Create the Kubernetes RBAC. Apply the cluster-role.yaml from the Kubernetes RBAC section.
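A sketch of both steps. The eksctl ClusterConfig shape for access entries is shown as commonly documented; angle-bracket values are placeholders.

```yaml
# accessentry.yaml: access entry for the Controller's IAM role
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <cluster-name>
  region: <region>
accessConfig:
  accessEntries:
    - principalARN: arn:aws:iam::<account-id>:role/aviatrix-role-app
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
          accessScope:
            type: cluster
```

```shell
eksctl create accessentry -f accessentry.yaml   # Step 1: IAM layer
kubectl apply -f cluster-role.yaml              # Step 2: RBAC layer
```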
AWS CLI + kubectl
Step 1 — Create the EKS access entry.

Step 2 — Associate the EKS view policy.

Step 3 — Create the Kubernetes RBAC for node access. Use the cluster-role.yaml from the Kubernetes RBAC section.
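The three steps as aws CLI and kubectl commands; the cluster name and account ID are placeholders.

```shell
# Step 1: create the access entry for the Controller's IAM role
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/aviatrix-role-app

# Step 2: associate the AWS-managed view policy, cluster-wide
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/aviatrix-role-app \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster

# Step 3: node RBAC (file from the Kubernetes RBAC section)
kubectl apply -f cluster-role.yaml
```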
CloudFormation + kubectl
CloudFormation can manage the EKS access entry. Save the following as aviatrix-eks-access.yaml and deploy the stack. Then associate the access policy via CLI (CloudFormation may not support AWS::EKS::AccessPolicyAssociation). Finally, create the Kubernetes RBAC.

| Method | Tools Required | Manages EKS Access Entry | Manages K8s RBAC | Single Command |
|---|---|---|---|---|
| Terraform | terraform + AWS/K8s providers | Yes | Yes | Yes (terraform apply) |
| eksctl + kubectl | eksctl, kubectl | Yes | Yes | No (2 commands) |
| AWS CLI + kubectl | aws CLI, kubectl | Yes | Yes | No (3 commands) |
| CloudFormation + kubectl | CF, aws CLI, kubectl | Partial (may need CLI for policy) | No | No (2-3 steps) |
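For reference, a minimal sketch of the aviatrix-eks-access.yaml template mentioned in the CloudFormation path above, followed by the deploy, policy-association, and RBAC commands. All names and IDs are placeholders.

```yaml
# aviatrix-eks-access.yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: EKS access entry for the Aviatrix Controller (sketch)
Resources:
  AviatrixAccessEntry:
    Type: AWS::EKS::AccessEntry
    Properties:
      ClusterName: my-cluster
      PrincipalArn: arn:aws:iam::123456789012:role/aviatrix-role-app
```

```shell
# Deploy the stack
aws cloudformation deploy \
  --stack-name aviatrix-eks-access \
  --template-file aviatrix-eks-access.yaml

# Associate the view policy via CLI
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/aviatrix-role-app \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster

# Create the Kubernetes RBAC
kubectl apply -f cluster-role.yaml
```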
Post-Onboarding
Verifying Onboarding Status
After onboarding, the cluster status on the Kubernetes Clusters tab transitions through:

| Status | Icon | Meaning |
|---|---|---|
| No | Gray | Discovered but not onboarded |
| Onboarding | Orange | Controller is establishing connectivity and beginning resource discovery |
| Yes | Green | Onboarded and resources are being actively discovered |
| Fail | Red | Onboarding failed — hover for error details, click Retry |
Creating Kubernetes SmartGroups
Once the cluster status is Yes, you can create SmartGroups from Kubernetes resources. Quick creation from the Kubernetes Clusters tab:

- Click Create SmartGroup on an onboarded cluster.
- Choose One SmartGroup per Namespace or One SmartGroup per Service.
- Select the namespaces or services to include.
- Click Create.
| Property | Description |
|---|---|
| k8s_cluster_id | Cluster ID (ARN for EKS, resource ID for AKS, self_link for GKE) |
| k8s_namespace | Namespace name |
| k8s_service | Service name |
| Custom K8s node labels | For example, environment=production |
Service and Label filters cannot be combined in the same SmartGroup rule.
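As a sketch, a namespace-scoped SmartGroup in Terraform. The assumption that `aviatrix_smart_group` match expressions accept these k8s_* fields (and the `type` value shown) should be verified against your provider version; the ARN and names are placeholders.

```hcl
resource "aviatrix_smart_group" "payments_ns" {
  name = "k8s-payments-namespace"

  selector {
    match_expressions {
      type           = "k8s"   # assumed selector type for Kubernetes resources
      k8s_cluster_id = "arn:aws:eks:us-east-2:123456789012:cluster/my-cluster"
      k8s_namespace  = "payments"
    }
  }
}
```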
Editing and Offboarding
Editing a Cluster
You can only edit the name of manually onboarded clusters.

Offboarding a Cluster
Offboarding removes the cluster configuration from the Controller. Resource discovery stops and SmartGroup resources are no longer updated.

- CoPilot UI: Click Offboard next to the cluster and confirm.
- Terraform: Remove the `aviatrix_kubernetes_cluster` resource and run `terraform apply`.
Offboarding does not remove EKS access entries, Kubernetes ClusterRoles, ClusterRoleBindings, or any IAM resources. Clean those up separately if they are no longer needed.
Troubleshooting
| Symptom | Cause | Resolution |
|---|---|---|
| Cluster not appearing on K8s Clusters tab | Discovery not enabled or cloud account missing permissions | Enable K8s Resource Discovery; verify cloud account has required IAM permissions |
| Onboarding fails with “invalid content” | Kubeconfig is malformed or uses exec-based auth | Validate YAML syntax; replace exec auth with static token or certificate; ensure file is not double base64-encoded |
| Onboarded but 0 namespaces/services/pods discovered | Controller cannot reach the K8s API server, or RBAC is insufficient | Verify network path from Controller to cluster API; verify ClusterRole has get/list/watch on required resources |
| SmartGroup shows no members after onboarding | Resources not yet synced, or cluster uses overlay networking | Wait for initial sync (may take up to 60 seconds); overlay mode limits SmartGroups to Service destinations only |
| "Overlay Networks have limited support" warning | Network mode is Overlay | Use flat networking if possible; see Platform Requirements |
| Status shows “Fail” with no useful error | Controller IAM role does not have an EKS access entry | Verify EKS access entry exists for the Controller’s role ARN; verify AmazonEKSViewPolicy is associated |