
Overview

Onboarding a Kubernetes cluster creates a cluster configuration on the Aviatrix Controller that grants read-only access to discover Kubernetes namespaces, services, pods, endpoint slices, and nodes. This data powers SmartGroup-based security policies in the Distributed Cloud Firewall (DCF), letting you write firewall rules that reference Kubernetes workloads by label, namespace, or service name. Two things must happen before DCF can enforce policy on a cluster:
  1. Grant Controller access to the cluster’s Kubernetes API — the Controller must be able to reach the API server and authenticate with sufficient RBAC permissions.
  2. Register the cluster — tell the Controller which cluster to watch by completing the onboarding workflow in the UI or via Terraform.

Onboarding Paths

| Path | When to Use | Supported Clouds |
| --- | --- | --- |
| Auto-discovered + CSP credentials | Managed clusters (EKS/AKS) visible on the Kubernetes Clusters tab | AWS, Azure |
| Auto-discovered + kubeconfig | Managed clusters where the CSP credential path is not viable | AWS, Azure |
| Manual onboarding | Self-managed, private, or undiscovered clusters | AWS, Azure, GCP |
| Terraform (full automation) | CI/CD, IaC, multi-cluster at scale | All |
The Kubernetes Clusters tab is only visible when DCF is enabled and Kubernetes Resource Discovery is turned on under Groups > Settings. See Kubernetes Resource Discovery for setup instructions.

Prerequisites

Platform Requirements

Before onboarding any cluster, confirm the following:
  • DCF is enabled. See Enable Distributed Cloud Firewall.
  • Kubernetes Resource Discovery is enabled under Groups > Settings. See Kubernetes Resource Discovery.
  • The Kubernetes API server must be network-reachable from the Controller.
    • Public clusters: restrict API server access to the Controller’s IP address for security.
    • Private clusters: a private network path is required. See Onboarding Private Kubernetes Clusters.
  • SNAT must be disabled on worker nodes so that pod source IPs are preserved for policy enforcement. See the AWS documentation on external SNAT.
  • Flat networking is recommended (this is the default for EKS and AKS). Overlay networks have limited SmartGroup support.
When a cluster uses an overlay network, only Service-type SmartGroups are supported. These SmartGroups can only be used as the destination in DCF rules, and they only control connections through the Load Balancer.
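For EKS clusters running the default AWS VPC CNI, one way to check the SNAT setting is to read the external-SNAT environment variable from the aws-node daemonset. This is a sketch that assumes the standard AWS VPC CNI deployment; it does not apply to other CNIs:

```shell
# Read the external-SNAT setting from the AWS VPC CNI daemonset.
# A value of "true" means worker nodes do not SNAT pod traffic,
# so pod source IPs are preserved for policy enforcement.
kubectl -n kube-system get daemonset aws-node \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="AWS_VPC_K8S_CNI_EXTERNALSNAT")].value}'
```

An empty result means the variable is unset and the CNI default (SNAT enabled) applies.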

IAM Permissions

The Controller’s cloud account must include the following permissions for cluster discovery and node-level enforcement.
Discovery
  • eks:ListClusters
  • eks:DescribeCluster
  • elasticloadbalancing:DescribeLoadBalancers
  • elasticloadbalancing:DescribeTags
SmartGroup node enforcement
  • ec2:DescribeSecurityGroups
  • ec2:DescribeInstances
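For reference, the permissions above could be expressed as a single IAM policy statement attached to the Controller's role. This is a sketch only; the standard Aviatrix IAM policies may already include some of these actions, and you can scope `Resource` more tightly than the wildcard shown here:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AviatrixK8sDiscovery",
      "Effect": "Allow",
      "Action": [
        "eks:ListClusters",
        "eks:DescribeCluster",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeTags",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
```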

Kubernetes RBAC

The AWS-managed AmazonEKSViewPolicy grants read access to most Kubernetes resources but does not include nodes. A supplemental ClusterRole is required so the Controller can discover node metadata for SmartGroup enforcement. Create a file called cluster-role.yaml with the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-nodes
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-nodes-binding
subjects:
  - kind: Group
    name: view-nodes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view-nodes
  apiGroup: rbac.authorization.k8s.io
Apply it to your cluster:
kubectl apply -f cluster-role.yaml
This ClusterRole and ClusterRoleBinding can also be managed as Terraform resources using kubernetes_cluster_role and kubernetes_cluster_role_binding, which is useful for automated or multi-cluster deployments.
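To spot-check the binding after applying it, kubectl's impersonation flags can simulate a member of the view-nodes group. The user name below is an arbitrary placeholder, and running this requires impersonation rights in your own kubectl context:

```shell
# Should print "yes" if the view-nodes ClusterRoleBinding is in effect.
kubectl auth can-i list nodes --as onboarding-check --as-group view-nodes
```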

Kubeconfig Requirements

When onboarding via kubeconfig file, the file must meet the following requirements:
  • Allowed verbs: get, list, and watch on namespaces, services, pods, nodes, and endpointslices.
  • No exec-based authentication — the kubeconfig must not rely on exec plugins (such as aws eks get-token) because the Controller cannot execute external binaries.
  • No create, update, or delete permissions — the Controller only needs read access.
  • Exactly one cluster, user, and context — multi-context kubeconfig files are not supported.
  • Inline credential data only — use certificate-authority-data, client-certificate-data, and client-key-data (base64-encoded) rather than file path references (certificate-authority, client-certificate, client-key, tokenFile). The Controller cannot read files from the local filesystem.
A minimal kubeconfig that satisfies these requirements:
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://ABCDEF1234567890.gr7.us-east-1.eks.amazonaws.com
      certificate-authority-data: LS0tLS1CRUdJTi...base64-encoded-ca-cert...LS0tLS1FTkQ=
contexts:
  - name: my-cluster-context
    context:
      cluster: my-cluster
      user: my-cluster-user
users:
  - name: my-cluster-user
    user:
      client-certificate-data: LS0tLS1CRUdJTi...base64-encoded-client-cert...LS0tLS1FTkQ=
      client-key-data: LS0tLS1CRUdJTi...base64-encoded-client-key...LS0tLS1FTkQ=
current-context: my-cluster-context
preferences: {}
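If your credentials exist as files, each one can be converted to the required inline form by base64-encoding it onto a single line. The sketch below uses a throwaway file in place of a real CA certificate and assumes GNU coreutils base64 (the -w0 flag disables line wrapping):

```shell
# Create a throwaway file standing in for a real CA certificate,
# then base64-encode it on a single line for certificate-authority-data.
printf 'demo-cert-bytes' > /tmp/demo-ca.crt
base64 -w0 < /tmp/demo-ca.crt
# -> ZGVtby1jZXJ0LWJ5dGVz
```

Repeat the same encoding for the client certificate and key to fill in client-certificate-data and client-key-data.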

Understanding the Access Model

Each cloud provider has its own authentication and authorization layers between the Aviatrix Controller and the Kubernetes API server. Before configuring access, it helps to understand what each layer does and why it is required.
EKS access requires two layers working together:
| Layer | Resource | What It Provides |
| --- | --- | --- |
| AWS IAM | EKS Access Entry + AmazonEKSViewPolicy | Authenticates the Controller’s IAM role to the EKS cluster and grants read access to namespaces, services, pods, and endpoint slices |
| Kubernetes RBAC | view-nodes ClusterRole + ClusterRoleBinding | Grants read access to nodes (not covered by AmazonEKSViewPolicy) |
The Controller authenticates to EKS by requesting a short-lived STS token on each API call. There is no long-lived credential to rotate.

Once both layers are in place, register the cluster so the Controller begins discovering Kubernetes resources.

What is the Controller’s IAM role ARN? The Controller authenticates to AWS using the aviatrix-role-app IAM role from the AWS account you onboarded in CoPilot. The ARN follows this format:
arn:aws:iam::123456789012:role/aviatrix-role-app
This is the same IAM role the Controller uses for all AWS API operations (launching gateways, managing route tables, and so on). You can find the role ARN in two places:
  • CoPilot: navigate to Cloud Resources > Cloud Account and locate the AWS account. The role ARN is listed in the account details.
  • AWS IAM Console: search for aviatrix-role-app under Roles.
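The ARN can also be retrieved with the AWS CLI, assuming credentials for the account that hosts the role:

```shell
# Print the Controller role ARN from IAM.
aws iam get-role --role-name aviatrix-role-app \
  --query 'Role.Arn' --output text
```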

Onboarding via Terraform

Terraform provides the most complete and repeatable onboarding path. A single terraform apply can grant Controller access and register the cluster in one step.
EKS requires an EKS Access Entry (IAM layer) and a Kubernetes ClusterRole (RBAC layer), followed by cluster registration. The configuration below handles all three. It requires the aws, kubernetes, and aviatrix Terraform providers configured for the target cluster.
variable "controller_role_arn" {
  description = "Aviatrix Controller IAM role ARN (aviatrix-role-app)"
  type        = string
}

variable "cluster_name" {
  description = "EKS cluster name"
  type        = string
}

data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

# --- Grant Controller access ---

# EKS access entry — authenticates Controller IAM role to the cluster
resource "aws_eks_access_entry" "aviatrix" {
  cluster_name      = var.cluster_name
  principal_arn     = var.controller_role_arn
  kubernetes_groups = ["view-nodes"]
  type              = "STANDARD"
}

# Attach the EKS managed view policy for broad read access
resource "aws_eks_access_policy_association" "aviatrix" {
  cluster_name  = var.cluster_name
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
  principal_arn = var.controller_role_arn

  access_scope {
    type = "cluster"
  }

  depends_on = [aws_eks_access_entry.aviatrix]
}

# ClusterRole for node visibility (not covered by AmazonEKSViewPolicy)
resource "kubernetes_cluster_role" "view_nodes" {
  metadata {
    name = "view-nodes"
  }

  rule {
    verbs      = ["get", "list", "watch"]
    api_groups = [""]
    resources  = ["nodes"]
  }
}

# Bind the ClusterRole to the "view-nodes" group
resource "kubernetes_cluster_role_binding" "view_nodes" {
  metadata {
    name = "view-nodes"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.view_nodes.metadata[0].name
  }

  subject {
    kind      = "Group"
    name      = "view-nodes"
    api_group = "rbac.authorization.k8s.io"
  }
}

# --- Register the cluster ---

resource "aviatrix_kubernetes_cluster" "this" {
  cluster_id          = data.aws_eks_cluster.this.arn
  use_csp_credentials = true

  depends_on = [
    aws_eks_access_policy_association.aviatrix,
    kubernetes_cluster_role_binding.view_nodes,
  ]
}
The cluster_id must be the full EKS ARN — for example, arn:aws:eks:us-east-2:123456789012:cluster/my-cluster.

Custom or Self-Managed Clusters

For clusters built with kops, kubeadm, k3s, Rancher, or similar tools, provide a kubeconfig and cluster details:
resource "aviatrix_kubernetes_cluster" "custom" {
  cluster_id  = "my-cluster-id"
  kube_config = var.kubeconfig

  cluster_details {
    account_name           = data.aviatrix_account.aws.account_name
    account_id             = data.aviatrix_account.aws.aws_account_number
    name                   = "my-cluster"
    region                 = "us-east-2"
    vpc_id                 = data.aws_vpc.vpc.id
    is_publicly_accessible = true
    platform               = "kops"
    version                = "1.30"
    network_mode           = "FLAT"
    tags = {
      environment = "production"
    }
  }
}
| Argument | Required | Description |
| --- | --- | --- |
| account_name | Yes | Aviatrix cloud account name |
| account_id | Yes | Cloud account ID (AWS account number, Azure subscription ID) |
| name | Yes | Display name for the cluster |
| region | Yes | Cloud region |
| vpc_id | Yes | VPC/VNet ID. AWS: vpc-xxx. Azure: full resource ID. |
| is_publicly_accessible | Yes | Whether the K8s API server is publicly reachable |
| platform | Yes | Free-form string, for example kops, kubeadm, k3s, rancher |
| version | Yes | Kubernetes version string |
| network_mode | Yes | FLAT or OVERLAY |
| project | No | GCP project ID |
| compartment | No | OCI compartment ID |
| tags | No | Key-value metadata map |

Multi-Cluster Automation

Use for_each to onboard multiple clusters in a single apply:
variable "eks_clusters" {
  description = "Map of logical name to EKS cluster name"
  type        = map(string)
}

data "aws_eks_cluster" "clusters" {
  for_each = var.eks_clusters
  name     = each.value
}

resource "aviatrix_kubernetes_cluster" "clusters" {
  for_each            = data.aws_eks_cluster.clusters
  cluster_id          = each.value.arn
  use_csp_credentials = true
}

Onboarding via CoPilot UI

For clusters discovered via cloud APIs, navigate to Cloud Resources > Cloud Assets > Kubernetes Clusters and click Onboard next to the cluster.
The Onboard Cluster dialog offers three options:
  1. Terraform — Displays generated HCL for the EKS access entry and RBAC resources. Copy the script, apply it with terraform apply, check the confirmation box, then click Onboard.
  2. Command Line — Displays generated eksctl YAML and kubectl YAML. Apply both configurations, check the confirmation box, then click Onboard.
  3. Upload Kubeconfig — Upload a kubeconfig file that meets the kubeconfig requirements. Click Onboard.
Options 1 and 2 handle both granting access and registration in one flow. Option 3 assumes the kubeconfig already has sufficient permissions.

Manual Onboarding

Use manual onboarding when the cluster was built with kops, kubeadm, k3s, Rancher, or similar tools, or when the cluster is not discoverable via cloud APIs.
The cluster must reside in a VPC/VNet in a supported cloud (AWS, Azure, or GCP). On-premises bare-metal clusters cannot be onboarded.
Navigate to Cloud Resources > Cloud Assets > Kubernetes Clusters and click Manually Onboard a Cluster.
| Field | Description |
| --- | --- |
| Name | Display name for the cluster. Becomes part of the generated cluster ID. |
| Cloud | AWS, Azure, or GCP. Determines which accounts, regions, and VPCs are available. |
| Account | Cloud account where the cluster resides. Must already be onboarded in CoPilot. |
| Region | Cloud region. |
| VPC/VNet | VPC or VNet where the cluster nodes run. |
| Network Mode | Flat (recommended) or Overlay. Overlay shows a warning about limited DCF support. |
| Kubeconfig File | Upload a kubeconfig file that meets the kubeconfig requirements. |
Click Onboard.

Onboarding via CLI (AWS Only)

If you prefer CLI tools over Terraform, the following methods grant Controller access to an EKS cluster and prepare it for onboarding. After completing any of these methods, register the cluster through the CoPilot UI or Terraform.
Step 1 — Create the EKS access entry. Save the following as accessentry.yaml, replacing the placeholder values:
kind: ClusterConfig
apiVersion: eksctl.io/v1alpha5
metadata:
  name: CLUSTER_NAME
  region: REGION
accessConfig:
  accessEntries:
    - principalARN: "CONTROLLER_ROLE_ARN"
      kubernetesGroups:
        - view-nodes
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
          accessScope:
            type: cluster
Apply it:
eksctl create accessentry -f accessentry.yaml
Step 2 — Create the Kubernetes RBAC. Confirm that kubectl points at the target cluster, then apply the cluster-role.yaml from the Kubernetes RBAC section:
kubectl config current-context
kubectl apply -f cluster-role.yaml
Alternatively, the AWS CLI can grant the same access without eksctl.
Step 1 — Create the EKS access entry:
aws eks create-access-entry \
  --cluster-name CLUSTER_NAME \
  --principal-arn CONTROLLER_ROLE_ARN \
  --kubernetes-groups view-nodes \
  --type STANDARD
Step 2 — Associate the EKS view policy:
aws eks associate-access-policy \
  --cluster-name CLUSTER_NAME \
  --principal-arn CONTROLLER_ROLE_ARN \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster
Step 3 — Create the Kubernetes RBAC for node access:
kubectl apply -f cluster-role.yaml
Use the cluster-role.yaml from the Kubernetes RBAC section.
CloudFormation can manage the EKS access entry. Save the following as aviatrix-eks-access.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Description: Grant Aviatrix Controller read access to an EKS cluster
Parameters:
  ClusterName:
    Type: String
    Description: Name of the EKS cluster
  ControllerRoleArn:
    Type: String
    Description: IAM role ARN of the Aviatrix Controller (aviatrix-role-app)

Resources:
  AviatrixAccessEntry:
    Type: AWS::EKS::AccessEntry
    Properties:
      ClusterName: !Ref ClusterName
      PrincipalArn: !Ref ControllerRoleArn
      KubernetesGroups:
        - view-nodes
      Type: STANDARD
Deploy the stack:
aws cloudformation deploy \
  --template-file aviatrix-eks-access.yaml \
  --stack-name aviatrix-eks-access-CLUSTER_NAME \
  --parameter-overrides \
    ClusterName=CLUSTER_NAME \
    ControllerRoleArn=CONTROLLER_ROLE_ARN
Then associate the access policy via CLI (CloudFormation may not support AWS::EKS::AccessPolicyAssociation):
aws eks associate-access-policy \
  --cluster-name CLUSTER_NAME \
  --principal-arn CONTROLLER_ROLE_ARN \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope type=cluster
Finally, create the Kubernetes RBAC:
kubectl apply -f cluster-role.yaml
Method comparison:
| Method | Tools Required | Manages EKS Access Entry | Manages K8s RBAC | Single Command |
| --- | --- | --- | --- | --- |
| Terraform | terraform + AWS/K8s providers | Yes | Yes | Yes (terraform apply) |
| eksctl + kubectl | eksctl, kubectl | Yes | Yes | No (2 commands) |
| AWS CLI + kubectl | aws CLI, kubectl | Yes | Yes | No (3 commands) |
| CloudFormation + kubectl | CloudFormation, aws CLI, kubectl | Partial (may need CLI for policy) | No | No (2-3 steps) |

Post-Onboarding

Verifying Onboarding Status

After onboarding, the cluster status on the Kubernetes Clusters tab transitions through:
| Status | Icon | Meaning |
| --- | --- | --- |
| No | Gray | Discovered but not onboarded |
| Onboarding | Orange | Controller is establishing connectivity and beginning resource discovery |
| Yes | Green | Onboarded and resources are being actively discovered |
| Fail | Red | Onboarding failed; hover for error details, then click Retry |

Creating Kubernetes SmartGroups

Once the cluster status is Yes, you can create SmartGroups from Kubernetes resources. Quick creation from the Kubernetes Clusters tab:
  1. Click Create SmartGroup on an onboarded cluster.
  2. Choose One SmartGroup per Namespace or One SmartGroup per Service.
  3. Select the namespaces or services to include.
  4. Click Create.
SmartGroup filter properties:
| Property | Description |
| --- | --- |
| k8s_cluster_id | Cluster ID (ARN for EKS, resource ID for AKS, self_link for GKE) |
| k8s_namespace | Namespace name |
| k8s_service | Service name |
| Custom K8s node labels | For example, environment=production |
Service and Label filters cannot be combined in the same SmartGroup rule.
Kubernetes SmartGroups can be used as source or destination in DCF rules like any other SmartGroup.

Editing and Offboarding

Editing a Cluster

You can only edit the name of manually onboarded clusters.

Offboarding a Cluster

Offboarding removes the cluster configuration from the Controller. Resource discovery stops and SmartGroup resources are no longer updated.
  • CoPilot UI: Click Offboard next to the cluster and confirm.
  • Terraform: Remove the aviatrix_kubernetes_cluster resource and run terraform apply.
You cannot offboard a cluster that is referenced by SmartGroups. Remove the SmartGroup references first.
Offboarding does not remove EKS access entries, Kubernetes ClusterRoles, ClusterRoleBindings, or any IAM resources. Clean those up separately if they are no longer needed.

Troubleshooting

| Symptom | Cause | Resolution |
| --- | --- | --- |
| Cluster not appearing on K8s Clusters tab | Discovery not enabled or cloud account missing permissions | Enable K8s Resource Discovery; verify the cloud account has the required IAM permissions |
| Onboarding fails with “invalid content” | Kubeconfig is malformed or uses exec-based auth | Validate YAML syntax; replace exec auth with a static token or certificate; ensure the file is not double base64-encoded |
| Onboarded but 0 namespaces/services/pods discovered | Controller cannot reach the K8s API server, or RBAC is insufficient | Verify the network path from Controller to cluster API; verify the ClusterRole has get/list/watch on the required resources |
| SmartGroup shows no members after onboarding | Resources not yet synced, or cluster uses overlay networking | Wait for the initial sync (may take up to 60 seconds); overlay mode limits SmartGroups to Service destinations only |
| “Overlay Networks have limited support” warning | Network mode is Overlay | Use flat networking if possible; see Platform Requirements |
| Status shows “Fail” with no useful error | Controller IAM role does not have an EKS access entry | Verify an EKS access entry exists for the Controller’s role ARN; verify AmazonEKSViewPolicy is associated |