Distributed Cloud Firewall for Kubernetes
Controller 8.2
Overview
Aviatrix Distributed Cloud Firewall (DCF) for Kubernetes extends Zero Trust security to containerized workloads across AWS EKS, Azure AKS, Google GKE, and self-managed Kubernetes clusters. This integration provides identity-based security policies, secure egress control, and unified visibility for Kubernetes environments within the Aviatrix Cloud Native Security Fabric (CNSF).
About DCF for Kubernetes
DCF for Kubernetes delivers application-aware, identity-based firewall protection for containerized workloads. Unlike traditional IP-based security approaches that struggle with Kubernetes' dynamic nature, DCF uses Kubernetes-native constructs (namespaces, pods, services, labels) to enforce security policies that automatically adapt as your applications scale.
Key Capabilities
- Identity-Based Security: Enforce firewall policies based on Kubernetes identities (namespace, pod, service) rather than ephemeral IP addresses. Policies automatically follow workloads as they scale, move, or restart
- Multicloud Kubernetes Security: Unified security policies across AWS EKS, Azure AKS, Google GKE, and self-managed clusters. Define security once, enforce everywhere
- Native Kubernetes Integration: Define firewall policies using Kubernetes Custom Resource Definitions (CRDs). Security policies are managed with the same kubectl and YAML workflows your teams already use for application deployments
- Secure Egress Control: Prevent unauthorized outbound traffic from Kubernetes workloads. Control egress at namespace, pod, and cluster levels with domain-based filtering and application-aware policies
- Advanced NAT and IP Management: Resolve IP overlap and exhaustion issues across multiple Kubernetes clusters with advanced NAT capabilities. Enable seamless communication between clusters, VMs, and serverless functions
Benefits
- Consistent Multicloud Security: Apply the same security policies across all Kubernetes environments regardless of cloud provider
- Zero Trust for Workloads: Implement identity-based segmentation and deny-by-default policies
- Compliance and Audit-Ready: Meet PCI-DSS, HIPAA, SOC 2, and other compliance requirements with comprehensive logging and audit trails
- Native Kubernetes Workflows: Define policies using familiar Kubernetes YAML and manage them with kubectl, Terraform, or GitOps workflows
- Automatic Discovery: Aviatrix automatically discovers Kubernetes clusters across AWS, Azure, and GCP
- Dynamic Policy Enforcement: Policies based on Kubernetes labels and selectors automatically apply to new workloads as they deploy
- IP Conflict Resolution: Solve IP overlap issues between multiple clusters with advanced NAT
- Zero Workflow Disruption: Security policies are defined as Kubernetes resources
- Fast Policy Deployment: Deploy security policy changes in seconds using kubectl apply
Architecture
Aviatrix DCF integrates with Kubernetes through the Cloud Asset Inventory service:
- Discovers Kubernetes clusters across AWS, Azure, and GCP
- Monitors cluster resources (namespaces, pods, services, deployments)
- Synchronizes Kubernetes resource metadata with the Aviatrix Controller
- Enforces firewall policies at the Aviatrix gateway level using Kubernetes identity information
Prerequisites
Enable DCF for Kubernetes
Starting from Controller 8.2, DCF policies for Kubernetes can be enabled through the CoPilot UI or Terraform.
Enable via Terraform:
resource "aviatrix_k8s_config" "test" {
enable_k8s = true
enable_dcf_policies = true
}
Enable via CoPilot UI:
Go to Cloud Resources > Cloud Assets > Kubernetes and enable Kubernetes discovery and DCF policies.
After enabling discovery, clusters (AKS/EKS/GKE) are shown under Cloud Resources > Cloud Assets > Kubernetes.
Onboard Kubernetes Clusters
Via CoPilot
Option 1: Manual Onboarding Using Kubeconfig File
- Go to Cloud Resources > Cloud Assets > Kubernetes
- Click Onboard Cluster
- Upload the kubeconfig file
- Click Save
Option 2: Onboard Discovered Clusters
If the cluster is discovered but not onboarded, use kubeconfig, Terraform, or CLI commands to onboard.
Ensure the access account has the required permissions to access the cluster.
Via Terraform
For Managed Kubernetes Clusters (EKS/AKS/GKE):
resource "aviatrix_kubernetes_cluster" "eks_cluster" {
cluster_id = data.aws_eks_cluster.eks_cluster.arn
use_csp_credentials = true
}
data "aws_eks_cluster" "eks_cluster" {
name = "mycluster"
}
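For an AKS cluster, a comparable configuration can reference the cluster's Azure resource ID. The following sketch is an assumption (not taken from this guide) that uses the azurerm provider's azurerm_kubernetes_cluster data source as the source of the cluster_id:

# Hypothetical AKS variant; the data source and cluster_id format are assumptions
data "azurerm_kubernetes_cluster" "aks_cluster" {
  name                = "mycluster"
  resource_group_name = "my-resource-group"
}

resource "aviatrix_kubernetes_cluster" "aks_cluster" {
  cluster_id          = data.azurerm_kubernetes_cluster.aks_cluster.id
  use_csp_credentials = true
}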
For Custom Built Clusters:
data "aws_vpc" "vpc" {
tags = {
Name = "spoke-east-2-vpc"
}
}
data "aviatrix_account" "aws" {
account_name = "aws"
}
resource "aviatrix_kubernetes_cluster" "my_cluster" {
cluster_id = "my-cluster-id"
kube_config = var.kubeconfig
cluster_details {
account_name = data.aviatrix_account.aws.account_name
account_id = data.aviatrix_account.aws.aws_account_number
name = "my_cluster"
region = "us-east-2"
vpc_id = data.aws_vpc.vpc.id
is_publicly_accessible = true
platform = "kops"
version = "1.30"
network_mode = "FLAT"
tags = {
"type" = "prod"
}
}
}
Example Kubeconfig File
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: $CA_DATA
    server: $ENDPOINT
  name: private-cluster
contexts:
- context:
    cluster: private-cluster
    user: private-cluster
  name: private-cluster
current-context: private-cluster
preferences: {}
users:
- name: private-cluster
  user:
    token: $TOKEN
Obtain the attributes referenced in the kubeconfig template:
# Service account token ($TOKEN)
TOKEN=$(kubectl get secret -n kube-system <service-account-name> -o jsonpath='{.data.token}' | base64 -d)
# Cluster CA certificate ($CA_DATA)
aws eks describe-cluster --name test-cluster --query cluster.certificateAuthority.data
# API server endpoint ($ENDPOINT)
aws eks describe-cluster --name test-cluster --query cluster.endpoint
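To assemble the kubeconfig from the template above, the placeholder values can be substituted with envsubst. This is a minimal sketch that assumes the template is saved as kubeconfig-template.yaml and that TOKEN, CA_DATA, and ENDPOINT have been exported in the current shell (the file name and variable handling are illustrative):

# Assumes TOKEN, CA_DATA, and ENDPOINT hold the values obtained above
export TOKEN CA_DATA ENDPOINT
envsubst < kubeconfig-template.yaml > kubeconfig.yaml

# Sanity check that the generated kubeconfig reaches the cluster
kubectl --kubeconfig kubeconfig.yaml get nodes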
Configure DCF Policies Using CRDs
Register CRDs to the Kubernetes Cluster
Register CRDs to the Kubernetes cluster using the Helm chart:
helm install --repo https://aviatrixsystems.github.io/k8s-firewall-charts k8s-firewall k8s-firewall
Verify the CRD registration:
kubectl get crds
Expected output:
NAME CREATED AT
applicationnetworkpolicies.networking.k8s.aws 2025-11-14T07:11:55Z
clusternetworkpolicies.networking.k8s.aws 2025-11-14T07:11:55Z
clusterpolicyendpoints.networking.k8s.aws 2025-11-14T07:11:55Z
cninodes.vpcresources.k8s.aws 2025-11-14T07:11:55Z
eniconfigs.crd.k8s.amazonaws.com 2025-11-14T07:13:23Z
firewallpolicies.networking.aviatrix.com 2025-11-14T07:19:03Z
policyendpoints.networking.k8s.aws 2025-11-14T07:11:55Z
securitygrouppolicies.vpcresources.k8s.aws 2025-11-14T07:11:55Z
webgrouppolicies.networking.aviatrix.com 2025-11-14T07:19:05Z
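To narrow the output to the Aviatrix resources, standard kubectl filtering can be used, for example:

# Show only the Aviatrix CRDs registered by the Helm chart
kubectl get crds | grep aviatrix.com

# List the resource kinds served under the Aviatrix API group
kubectl api-resources --api-group=networking.aviatrix.com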
Apply Firewall Policies
Apply firewall policies:
kubectl apply -f <file name>
Example firewall policy:
kind: FirewallPolicy
apiVersion: networking.aviatrix.com/v1alpha1
metadata:
  name: test-firewall-policy
  namespace: dev
spec:
  rules:
    - name: test
      logging: true
      selector:
        matchLabels:
          app: dev-pods
      action: permit
      protocol: any
      destinationSmartGroups:
        - name: anywhere
      webGroups:
        - name: checkip
  webGroups:
    - name: checkip
      domains:
        - "www.google.com"
  smartGroups:
    - name: anywhere
      selectors:
        - cidr: 0.0.0.0/0
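After applying the manifest, the policy objects can be inspected like any other namespaced resource. The commands below assume the example above was applied to the dev namespace:

# List and inspect FirewallPolicy resources in the dev namespace
kubectl get firewallpolicies -n dev
kubectl describe firewallpolicy test-firewall-policy -n dev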
Verify Firewall Policy Status
Check policy status:
kubectl get events -n dev
Example output:
LAST SEEN TYPE REASON OBJECT MESSAGE
57m Normal UpdatePolicyListSuccess firewallpolicy/test-firewall-policy Updated policy list for firewall policy with UUID e006b5fe-bf73-46f7-93b8-d76e7325e2eb
113s Normal UpdatePolicyListSuccess firewallpolicy/test-firewall-policy Updated policy list for firewall policy with UUID e006b5fe-bf73-46f7-93b8-d76e7325e2eb
kubectl get events
Example output:
LAST SEEN TYPE REASON OBJECT MESSAGE
41s Normal CreateWebGroupSuccess webgrouppolicy/pod-to-web Created webgroup with name webgrouppolicy-default--pod-to-web--58c5fb5e
41s Normal CreateSmartGroupSuccess webgrouppolicy/pod-to-web Created smartgroup with name webgrouppolicy-target-default--pod-to-web--58c5fb5e
41s Normal CreatePolicySuccess webgrouppolicy/pod-to-web Updated policy webgrouppolicy-default-pod-to-web-58c5fb5e
Deploying DCF on Private Kubernetes Clusters
Private Kubernetes clusters are container orchestration environments where the entire cluster, including the control plane (API server, etcd, scheduler, controller manager) and the worker nodes, is isolated within a private network.
Private Control Plane Endpoint
- The API server endpoint is not publicly accessible
- The Kubernetes API cannot be reached directly from the public internet
- Access is restricted to specific private networks
Private Worker Nodes
- Worker nodes reside in private subnets without public IP addresses
- Outbound internet access is routed through NAT gateways or proxies within the private network
Configure private cluster endpoint access:
# Configure private cluster - disable public access, enable private access
cluster_endpoint_public_access = false
cluster_endpoint_private_access = true
Onboarding Steps for Private Clusters
- Create a Spoke Gateway inside the VPC that contains the Aviatrix Controller
- Configure transit and DCF policies so that the VPC of the Controller can connect to the VPC of the Kubernetes cluster
- Configure security groups on the Kubernetes cluster so that the Controller can connect to its API server
Create Spoke Gateway
Create a spoke gateway for the Controller VPC. The Controller still runs in a public subnet, but the spoke gateway allows it to reach private addresses in other spoke VPCs.
Configure Transit and DCF Policies
- Ensure both the VPC of the Controller and the VPC of the Kubernetes cluster are connected via a Transit
- Configure a DCF policy that allows the Controller to connect to the Kubernetes control plane:
resource "aviatrix_smart_group" "controller" {
name = "controller"
selector {
match_expressions {
type = "vm"
name = "controller-instance"
}
}
}
resource "aviatrix_web_group" "private_cluster" {
name = "private-cluster"
selector {
match_expressions {
snifilter = "B3CA515413BF577BD988684D6FED4D01.gr7.us-east-2.eks.amazonaws.com"
}
}
}
resource "aviatrix_distributed_firewalling_policy_list" "test" {
policies {
name = "k8s-controller"
priority = 3
action = "PERMIT"
protocol = "ANY"
src_smart_groups = [
aviatrix_smart_group.controller.uuid,
]
dst_smart_groups = ["def000ad-0000-0000-0000-000000000000"]
web_groups = [
aviatrix_web_group.private_cluster.uuid,
]
}
}
Obtain the hostname for the webgroup from the cluster endpoint URL:
aws eks describe-cluster --name private-cluster --query cluster.endpoint
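The snifilter value is the endpoint hostname, not the full URL. A small sketch (assuming the AWS CLI is configured for the account) strips the https:// scheme:

# Capture the endpoint URL, e.g. https://B3CA...gr7.us-east-2.eks.amazonaws.com
ENDPOINT=$(aws eks describe-cluster --name private-cluster --query cluster.endpoint --output text)

# Drop the scheme to obtain the hostname used in the web group snifilter
echo "${ENDPOINT#https://}"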
Configure Security Groups
By default, the Terraform EKS module configures security groups so that only the worker nodes can connect to the API server. Additional rules need to be configured to allow the Controller to connect:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.0"
# ...
cluster_security_group_additional_rules = {
avx_controller = {
cidr_blocks = ["10.0.0.28/32"] # Address of the controller
description = "Allow all traffic from Aviatrix controller"
from_port = 0
to_port = 0
protocol = "all"
type = "ingress"
}
}
}
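Rather than hardcoding the Controller address in the rule, it can be supplied as a variable. The variable name below is illustrative:

# Illustrative variable for the Controller address (name is an assumption)
variable "aviatrix_controller_cidr" {
  description = "CIDR of the Aviatrix Controller allowed to reach the EKS API server"
  type        = string
  default     = "10.0.0.28/32"
}

# In the rule above, reference it instead of the literal:
#   cidr_blocks = [var.aviatrix_controller_cidr]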
Create Service Account
Create a new ServiceAccount in the private Kubernetes cluster that represents the Aviatrix Controller and grants it the permissions it requires:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: avx-controller
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: avx-controller
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: avx-controller
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: avx-controller
rules:
- verbs:
  - get
  - list
  - watch
  apiGroups:
  - ""
  resources:
  - pods
  - services
  - namespaces
  - nodes
- verbs:
  - get
  - list
  - watch
  apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
- verbs:
  - "*"
  apiGroups:
  - networking.aviatrix.com
  resources:
  - "*"
- verbs:
  - create
  - patch
  apiGroups:
  - events.k8s.io
  resources:
  - events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: avx-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: avx-controller
subjects:
- kind: ServiceAccount
  name: avx-controller
  namespace: kube-system
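Assuming the manifest above is saved as avx-controller-rbac.yaml (the file name is illustrative), apply it and confirm the token secret was populated:

# Apply the ServiceAccount, Secret, ClusterRole, and ClusterRoleBinding
kubectl apply -f avx-controller-rbac.yaml

# The secret should contain a non-empty token for the avx-controller service account
kubectl get secret -n kube-system avx-controller -o jsonpath='{.data.token}' | base64 -d | head -c 20; echo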
Onboard the Private Kubernetes Cluster
For Controller Version 8.2:
resource "aviatrix_kubernetes_cluster" "eks_cluster" {
cluster_id = data.aws_eks_cluster.eks_cluster.arn
use_csp_credentials = true
}
For Controller Version 8.1:
After creating the Service Account, build a kubeconfig file with the following attributes:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: $CA_DATA
    server: $ENDPOINT
  name: private-cluster
contexts:
- context:
    cluster: private-cluster
    user: private-cluster
  name: private-cluster
current-context: private-cluster
preferences: {}
users:
- name: private-cluster
  user:
    token: $TOKEN
Extract the attributes:
# Service account token ($TOKEN)
TOKEN=$(kubectl get secret -n kube-system avx-controller -o jsonpath='{.data.token}' | base64 -d)
# Cluster CA certificate ($CA_DATA)
aws eks describe-cluster --name test-cluster --query cluster.certificateAuthority.data
# API server endpoint ($ENDPOINT)
aws eks describe-cluster --name test-cluster --query cluster.endpoint
Onboard the cluster as a new Kubernetes cluster, even if the Controller has already discovered it. The Controller currently does not reach out to discovered private clusters, so a new cluster entry must be created:
data "local_file" "kubeconfig" {
filename = "./private_kubeconfig.yaml"
}
data "aviatrix_account" "aws_account" {
account_name = var.aviatrix_aws_access_account
}
resource "aviatrix_kubernetes_cluster" "my_private_cluster" {
# This must be a different id than the actual id of the discovered cluster
# (ARN in case of EKS)
cluster_id = "my-cluster-id"
kube_config = data.local_file.kubeconfig.content
cluster_details {
account_name = data.aviatrix_account.aws_account.account_name
account_id = data.aviatrix_account.aws_account.aws_account_number
name = "my_private_cluster"
region = "us-east-2"
vpc_id = aws_vpc.eks_cluster_vpc.id
is_publicly_accessible = true
platform = "EKS"
version = "1.30"
network_mode = "FLAT"
}
}
Ensure the cluster_details are set correctly, including the region and vpc_id. About 30 seconds after the configuration is applied, the Controller should report a successful connection to the cluster. Note that both entries will appear (the discovered cluster and the manually configured cluster), and the cluster will show as PUBLIC in CoPilot even though it is private.
Configure DCF Policies for Private Clusters
Configure SmartGroups for Kubernetes workloads in this cluster. Note that the cluster ID must be the ID of the custom cluster, not the discovered cluster:
resource "aviatrix_smart_group" "k8s_nodes" {
name = "k8s-node"
selector {
match_expressions {
type = "k8s_node"
k8s_cluster_id = "my-cluster-id"
}
}
}
resource "aviatrix_smart_group" "k8s_pods" {
name = "k8s-pods"
selector {
match_expressions {
type = "k8s"
k8s_cluster_id = "my-cluster-id"
k8s_namespace = "test-namespace"
}
}
}
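These SmartGroups can be referenced in a DCF policy in the same way as the controller-access rule shown earlier. The sketch below is illustrative only; the rule name, priority, and the choice to permit pod-to-node traffic are assumptions:

# Illustrative policy referencing the SmartGroups defined above
resource "aviatrix_distributed_firewalling_policy_list" "k8s_policies" {
  policies {
    name     = "pods-to-nodes"
    priority = 10
    action   = "PERMIT"
    protocol = "ANY"
    src_smart_groups = [
      aviatrix_smart_group.k8s_pods.uuid,
    ]
    dst_smart_groups = [
      aviatrix_smart_group.k8s_nodes.uuid,
    ]
  }
}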
Accessing Private Clusters
From a Machine in the VPC:
- Ensure the security group allows HTTPS (port 443) from your machine
- Configure kubectl:
aws eks update-kubeconfig --region us-east-2 --name eks-priv-cnsf-test-cluster
kubectl get nodes
From Outside the VPC:
For a private cluster, you need one of the following:
- A VPN connection to the VPC
- A bastion host/jump server
- AWS Systems Manager Session Manager
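As one example of the Session Manager option, SSM port forwarding can tunnel HTTPS to the private API endpoint through an instance in the VPC. The instance ID, hostname, and ports below are placeholders, and the kubeconfig must then point at the local port (with the TLS server name set to the cluster endpoint); this only illustrates the tunnel:

# Assumption: an SSM-managed instance in the cluster VPC can reach the API endpoint
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"host":["B3CA515413BF577BD988684D6FED4D01.gr7.us-east-2.eks.amazonaws.com"],"portNumber":["443"],"localPortNumber":["8443"]}'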