Overview
Aviatrix Distributed Cloud Firewall (DCF) for Kubernetes extends Zero Trust security to containerized workloads across AWS EKS, Azure AKS, Google GKE, and self-managed Kubernetes clusters. This integration provides identity-based security policies, secure egress control, and unified visibility for Kubernetes environments within the Aviatrix Cloud Native Security Fabric (CNSF).
About DCF for Kubernetes
DCF for Kubernetes delivers application-aware, identity-based firewall protection for containerized workloads. Unlike traditional IP-based security approaches that struggle with Kubernetes’ dynamic nature, DCF uses Kubernetes-native constructs (namespaces, pods, services, labels) to enforce security policies that automatically adapt as your applications scale.
Key Capabilities
- Identity-Based Security: Enforce firewall policies based on Kubernetes identities (namespace, pod, service) rather than ephemeral IP addresses. Policies automatically follow workloads as they scale, move, or restart
- Multicloud Kubernetes Security: Unified security policies across AWS EKS, Azure AKS, Google GKE, and self-managed clusters. Define security once, enforce everywhere
- Native Kubernetes Integration: Define firewall policies using Kubernetes Custom Resource Definitions (CRDs). Security policies are managed with the same kubectl and YAML workflows your teams already use for application deployments
- Secure Egress Control: Prevent unauthorized outbound traffic from Kubernetes workloads. Control egress at namespace, pod, and cluster levels with domain-based filtering and application-aware policies
- Advanced NAT and IP Management: Resolve IP overlap and exhaustion issues across multiple Kubernetes clusters with advanced NAT capabilities. Enable seamless communication between clusters, VMs, and serverless functions
Benefits
- Consistent Multicloud Security: Apply the same security policies across all Kubernetes environments regardless of cloud provider
- Zero Trust for Workloads: Implement identity-based segmentation and deny-by-default policies
- Compliance and Audit-Ready: Meet PCI-DSS, HIPAA, SOC 2, and other compliance requirements with comprehensive logging and audit trails
- Native Kubernetes Workflows: Define policies using familiar Kubernetes YAML and manage them with kubectl, Terraform, or GitOps workflows
- Automatic Discovery: Aviatrix automatically discovers Kubernetes clusters across AWS, Azure, and GCP
- Dynamic Policy Enforcement: Policies based on Kubernetes labels and selectors automatically apply to new workloads as they deploy
- IP Conflict Resolution: Solve IP overlap issues between multiple clusters with advanced NAT
- Zero Workflow Disruption: Security policies are defined as Kubernetes resources
- Fast Policy Deployment: Deploy security policy changes in seconds using kubectl apply
Architecture
Aviatrix DCF integrates with Kubernetes through the Cloud Asset Inventory service:
- Discovers Kubernetes clusters across AWS, Azure, and GCP
- Monitors cluster resources (namespaces, pods, services, deployments)
- Synchronizes Kubernetes resource metadata with Aviatrix Controller
- Enforces firewall policies at the Aviatrix gateway level using Kubernetes identity information
Supported Kubernetes Distributions
- AWS Elastic Kubernetes Service (EKS)
- Azure Kubernetes Service (AKS)
- Google Kubernetes Engine (GKE)
- Private Kubernetes clusters in the cloud
Prerequisites
Cloud Provider Permissions
AWS:
- eks:DescribeCluster
- eks:ListClusters
- IAM role for EKS cluster access
Azure:
- Microsoft.ContainerService/managedClusters/read
- Microsoft.ContainerService/managedClusters/listClusterUserCredential/action
GCP:
- container.clusters.get
- container.clusters.list
Network Connectivity
Aviatrix Controller must have network connectivity to Kubernetes API servers. For private clusters, see the Private Kubernetes Clusters section.
Enable DCF for Kubernetes
Starting from Controller 8.2, DCF policies for Kubernetes can be enabled through the Controller UI and Terraform. Enable via Terraform:
Onboard Kubernetes Clusters
Via CoPilot
Option 1: Manual Onboarding Using a Kubeconfig File
- Go to Cloud Resources > Cloud Assets > Kubernetes
- Click Onboard Cluster
- Upload the kubeconfig file
- Click Save
Via Terraform
For Managed Kubernetes Clusters (EKS/AKS/GKE):
Example Kubeconfig File
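The kubeconfig uploaded during onboarding follows the standard Kubernetes format. A minimal sketch is shown below; the cluster name, API server endpoint, and credentials are placeholders you would replace with values from your own cluster:

```yaml
# Placeholder kubeconfig: names, endpoint, and credentials are examples only.
apiVersion: v1
kind: Config
clusters:
- name: my-eks-cluster
  cluster:
    server: https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com   # API server endpoint
    certificate-authority-data: <base64-encoded-CA-certificate>
contexts:
- name: my-eks-cluster
  context:
    cluster: my-eks-cluster
    user: aviatrix-controller
current-context: my-eks-cluster
users:
- name: aviatrix-controller
  user:
    token: <service-account-bearer-token>   # e.g. a ServiceAccount token
```

Using a dedicated ServiceAccount token (rather than personal user credentials) keeps the Controller's access auditable and independently revocable.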
Configure DCF Policies Using CRDs
Register CRD to Kubernetes Cluster
Register CRDs to the Kubernetes cluster using the Helm chart:
Apply Firewall Policies
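Once the CRDs are registered, firewall policies are regular Kubernetes manifests. The group, kind, and spec fields below are illustrative assumptions, not the documented Aviatrix schema; consult the CRDs installed by the Helm chart for the real specification:

```yaml
# Hypothetical manifest: apiVersion, kind, and spec field names are
# assumptions, not the actual Aviatrix CRD schema.
apiVersion: dcf.aviatrix.com/v1
kind: FirewallPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  source:
    podSelector:
      matchLabels:
        app: frontend        # matches pods labeled app=frontend
  destination:
    podSelector:
      matchLabels:
        app: backend         # matches pods labeled app=backend
  ports:
    - protocol: TCP
      port: 8080
  action: Allow
```

A manifest like this would be applied the same way as any other Kubernetes resource, e.g. kubectl apply -f policy.yaml.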
Apply firewall policies:
Verify Firewall Policy Status
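Applied policies can be inspected with standard kubectl verbs. The resource name below is a placeholder for whichever CRD kind the Helm chart actually installs:

```shell
# "firewallpolicies" is a placeholder for the installed CRD's plural name.
# List policies and their state across all namespaces:
kubectl get firewallpolicies --all-namespaces

# Show detailed status and events for a single policy:
kubectl describe firewallpolicy allow-frontend-to-backend -n demo
```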
Check policy status:
Deploying DCF on Private Kubernetes Clusters
Private Kubernetes clusters are container orchestration environments where the entire cluster, including the control plane (API server, etcd, scheduler, controller manager) and the worker nodes, is isolated within a private network.
Private Control Plane Endpoint
- The API server endpoint is not publicly accessible
- Cannot reach the Kubernetes API directly from the public internet
- Access is restricted to specific private networks
Private Worker Nodes
- Worker nodes reside in private subnets without public IP addresses
- Outbound internet access is routed through NAT gateways or proxies within the private network
Onboarding Steps for Private Clusters
- Create a Spoke Gateway inside the VPC that contains the Aviatrix Controller
- Configure transit and DCF policies so that the VPC of the Controller can connect to the VPC of the Kubernetes cluster
- Configure security groups on the Controller so it can connect to the Kubernetes API servers
Create Spoke Gateway
Create a spoke gateway for the Controller VPC. The Controller will still run in a public subnet, but the spoke gateway can be used to connect to private addresses in other spokes.
Configure Transit and DCF Policies
- Ensure both the VPC of the Controller and the Kubernetes cluster are connected via a Transit
- Configure a DCF policy that allows the Controller to connect to the Kubernetes control plane:
Configure Security Groups
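Managed-cluster tooling typically locks the API server's security group down to the worker nodes, so the Controller needs an explicit ingress rule. A hedged sketch with the AWS CLI, in which both security group IDs are placeholders:

```shell
# Permit the Aviatrix Controller's security group to reach the cluster's
# API server security group on TCP 443. Both group IDs are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp \
  --port 443 \
  --source-group sg-0bbbbbbbbbbbbbbbb
```

Referencing the Controller's security group (rather than its IP address) keeps the rule valid even if the Controller's private IP changes.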
By default, the Terraform EKS module configures security groups so that only the worker nodes can connect to the API server. Additional rules need to be configured to allow the Controller to connect:
Create Service Account
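A minimal sketch of such a ServiceAccount with read-only access to the resources DCF monitors is shown below; the names and the exact permission set are assumptions, so adjust them to match what your Controller version requires:

```yaml
# Names and the permission list here are illustrative assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aviatrix-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aviatrix-controller-read
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aviatrix-controller-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aviatrix-controller-read
subjects:
- kind: ServiceAccount
  name: aviatrix-controller
  namespace: kube-system
```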
Create a new ServiceAccount in the private Kubernetes cluster that represents the Aviatrix Controller and contains all its permissions:
Onboard the Private Kubernetes Cluster
For Controller Version 8.2:
Ensure that the cluster_details are set correctly, including the region and vpc_id. Around 30 seconds after configuration, the Controller should show that it connected successfully to the cluster. Note that both clusters will appear (the discovered cluster and the manually configured cluster), and the cluster will show as PUBLIC in CoPilot even though it is a private cluster.
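The Terraform shape of the onboarding resource is sketched below. Only cluster_details, region, and vpc_id are taken from the text above; the resource type and all other argument names are assumptions, so check the Aviatrix Terraform provider documentation for the exact schema:

```terraform
# Hypothetical sketch: resource type and most argument names are
# assumptions; only cluster_details, region, and vpc_id come from the
# documentation above. Values are placeholders.
resource "aviatrix_kubernetes_cluster" "private_cluster" {
  cluster_id       = "my-private-eks"            # placeholder identifier
  kube_config_file = file("kubeconfig.yaml")     # kubeconfig for the ServiceAccount

  cluster_details {
    region = "us-east-1"                         # region of the private cluster
    vpc_id = "vpc-0123456789abcdef0"             # VPC hosting the cluster
  }
}
```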
Configure DCF Policies for Private Clusters
Configure SmartGroups for Kubernetes workloads in this cluster. Note that the cluster ID must be the ID of the custom cluster, not the discovered cluster:
Accessing Private Clusters
From a Machine in the VPC:
- Ensure the security group allows HTTPS (port 443) from your machine
- Configure kubectl
From outside the VPC, use one of:
- A VPN connection to the VPC
- A bastion host/jump server
- AWS Systems Manager Session Manager
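Once network reachability is in place, kubectl can be pointed at the private endpoint with the cloud provider's CLI; for example on EKS (cluster name and region are placeholders):

```shell
# Writes/updates ~/.kube/config with credentials for the private cluster.
# Cluster name and region below are placeholders.
aws eks update-kubeconfig --name my-private-eks --region us-east-1

# Confirm connectivity to the private API server:
kubectl cluster-info
kubectl get nodes
```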