High Performance Transit Network - Insane Mode¶
This document discusses Aviatrix High Performance Transit Network and answers related questions.
Why is Transit VPC performance capped at 1.25Gbps?¶
In the current Transit VPC solution, the throughput is capped at 1.25Gbps regardless of whether you have a 10Gbps Direct Connect (DX) link. The reason is that in the Transit VPC deployment there is an IPSEC session between the VGW and the Transit gateway, and the VGW has a performance limitation.
AWS VGW IPSEC has a published performance of 1.25Gbps. AWS is not alone: all cloud providers have that performance cap, and in fact, all software-based IPSEC VPN solutions have a similar cap.
Why is that?
Most virtual routers or software-based routers are built on general-purpose CPUs. Despite the vast advancement in CPU technology, why does IPSEC performance not scale further?
It turns out the problem lies in the nature of tunneling, a common technique in networking to connect two endpoints.
When two general-purpose server or virtual machine based routers are connected by an IPSEC tunnel, there is one UDP or ESP session going between the two machines, as shown below.
In the above diagram, the virtual router has multiple CPU cores, but since there is only one tunnel established, the Ethernet interface can only direct incoming packets to a single core. The performance is therefore limited to one CPU core, regardless of how many CPU cores and how much memory you provide.
This is true not only for IPSEC, but also for all tunneling protocols, such as GRE and IPIP.
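To make the limitation concrete, the Python sketch below (illustrative only, not Aviatrix code) mimics how a receive-side scaling (RSS) style NIC hash assigns packets to receive queues. Because every packet of a single tunnel carries the same outer header tuple, they all hash to the same queue and are decrypted by the same core.

```python
# Minimal sketch (not Aviatrix code): why a single IPSEC tunnel lands on one core.
# A NIC with receive-side scaling (RSS) hashes each packet's outer header tuple to
# pick a receive queue; every packet of a single ESP/UDP tunnel shares that tuple,
# so all packets hash to the same queue and are processed by the same CPU core.
import hashlib

NUM_QUEUES = 8  # one receive queue pinned to each CPU core

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Toy stand-in for the NIC's RSS hash over the packet header tuple."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % NUM_QUEUES

# One IPSEC tunnel: all encapsulated packets carry the same outer header tuple.
single_tunnel = [("10.0.0.1", "10.0.1.1", 4500, 4500)] * 1000
queues_used = {rss_queue(*pkt) for pkt in single_tunnel}
print(queues_used)   # a single queue -> a single core does all the decryption
```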
Aviatrix high performance Insane Mode Encryption¶
The Aviatrix Insane Mode tunneling technique establishes multiple tunnels between the two virtual routers, allowing all CPU cores to be used so that performance scales with the CPU resources, as shown below.
With Aviatrix Insane Mode tunneling, IPSEC encryption can achieve 10Gbps, 25Gbps and beyond, leveraging the multiple CPU cores in a single instance, VM or host.
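The self-contained sketch below (again illustrative, not Aviatrix's actual implementation) shows one plausible way a sender can spread traffic over multiple tunnels: hash each inner flow's 5-tuple to choose a tunnel, so each flow keeps its packet ordering while different flows land on different tunnels and, on the receiving side, on different queues and cores.

```python
# Minimal sketch (not Aviatrix's implementation): spreading inner flows over
# several parallel tunnels so the receiver's RSS hash sees distinct outer tuples
# and distributes the decryption work across all CPU cores.
import hashlib
from collections import Counter

NUM_TUNNELS = 8   # parallel IPSEC tunnels between the same pair of routers

def pick_tunnel(src_ip, dst_ip, proto, src_port, dst_port) -> int:
    """Hash the inner flow's 5-tuple to select one of the parallel tunnels."""
    key = f"{src_ip}-{dst_ip}-{proto}-{src_port}-{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % NUM_TUNNELS

# Many distinct inner flows between two VPCs.
flows = [("10.10.0.5", "10.20.0.9", "tcp", 40000 + i, 443) for i in range(1000)]
usage = Counter(pick_tunnel(*f) for f in flows)
print(usage)      # flows spread across all tunnels, so all CPU cores share the work
```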
What are the use cases for Insane Mode?¶
- 10Gbps Transit performance
- Encryption over Direct Connect
- Overcome the VGW performance limit and 100-route limit
How can I deploy Aviatrix Insane Mode?¶
Aviatrix Insane Mode is integrated into the Transit Network solution to provide 10Gbps encrypted performance between on-prem and the Transit VPC. For VPC-to-VPC traffic, Insane Mode can achieve 20Gbps.
Insane Mode can also be deployed in a flat (as opposed to Transit VPC) architecture for 10Gbps encryption.
The diagram below illustrates the high performance encryption between the Transit VPC and on-prem, and between the Transit VPC and Spoke VPCs.
Instance sizes and IPSEC Performance¶
Insane Mode is available on AWS for the C5 and C5n instance series. For more performance test results and how to tune your environment to get the best performance, check out this document.
How does Insane Mode work?¶
When a gateway is launched with Insane Mode enabled, a new /26 public subnet is created in which the Insane Mode gateway is launched.
Insane Mode builds high performance encryption tunnel over private network links. The private network links are Direct Connect (DX) and AWS Peering (PCX).
For Insane Mode between two gateways, whether between a Transit GW and a Spoke gateway or between two Transit GWs (Transit Peering), the Aviatrix Controller automatically creates the underlying AWS Peering connection and builds the tunnels over it.
Since Insane Mode tunnels run over private network links, the VPC route architecture is as described below: in the route table associated with the EC2 instances, the route entry to the remote site points to the Aviatrix gateway, while in the route table associated with the Aviatrix gateway instance, the route entry to the remote site points to the PCX or VGW.
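The boto3 sketch below (hypothetical IDs; the Aviatrix Controller programs these routes for you, so you do not need to create them yourself) illustrates the two-tier route design: the instances' route table sends the remote CIDR to the Aviatrix gateway, and the gateway's route table sends the same CIDR to the PCX or VGW.

```python
# Illustrative sketch of the route design described above, using the standard
# boto3 EC2 API. All IDs and CIDRs are hypothetical placeholders; in practice the
# Aviatrix Controller creates these routes automatically.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

REMOTE_CIDR = "10.50.0.0/16"   # on-prem or remote VPC CIDR (example value)

# 1) Route table associated with the application EC2 instances: the remote CIDR
#    points at the Aviatrix gateway's ENI, so traffic is encrypted first.
ec2.create_route(
    RouteTableId="rtb-0aaaaaaaaaaaaaaaa",          # hypothetical ID
    DestinationCidrBlock=REMOTE_CIDR,
    NetworkInterfaceId="eni-0bbbbbbbbbbbbbbbb",    # Aviatrix gateway ENI (hypothetical)
)

# 2) Route table associated with the Aviatrix gateway instance: the same remote
#    CIDR points at the private link, a VPC peering connection (PCX) or the VGW.
ec2.create_route(
    RouteTableId="rtb-0cccccccccccccccc",          # hypothetical ID
    DestinationCidrBlock=REMOTE_CIDR,
    VpcPeeringConnectionId="pcx-0dddddddddddddddd",  # or GatewayId="vgw-..." over DX
)
```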
Aviatrix hardware appliance¶
Aviatrix offers a 1U rack-mountable hardware appliance that is deployed in the datacenter and works with the Aviatrix gateway.
The Aviatrix CloudN appliance specification:
| Item | Specification | Note |
|---|---|---|
| Dimension | 1U rack mount | |
| Server | HPE ProLiant DL360 Gen10, Xeon Gold 6130 | |
| 10Gbps Ethernet ports | 2 x SFP+ | 1 LAN port and 1 WAN port |
| 1Gbps Ethernet port | RJ45 | 1 Management port |
More information on HPE ProLiant DL360 Gen10 Server can be found here.
How to deploy Aviatrix hardware appliance?¶
Datacenter deployment is shown in the diagram below with redundancy, where R1 and R2 are two edge routers that connect to the VGW over DX, and R3 and R4 are two routers that connect to the inside of the datacenter. Aviatrix CloudN runs a BGP session with R3 and R4 to collect datacenter routes. The VGW is only used to terminate DX. The Aviatrix gateway and the on-prem appliance CloudN run a BGP session to propagate on-prem routes to the Transit VPC; IPSEC tunnels are also built between the two.
A logical deployment layout is described as below.
Reference Deployment Diagrams¶
Single Aviatrix CloudN Appliance¶
A sample configuration on an ISR is as follows.
Aviatrix CloudN Appliance with HA¶
Redundant DX Deployment¶
How to configure Insane Mode for Transit VPC?¶
At Step 1 of the Transit Network workflow, select “Insane Mode Encryption”.
Pre-deployment Check List¶
The Aviatrix support team configures and updates the software before shipping the appliance. The deployment topology for Aviatrix CloudN is as follows:
Please collect the information requested below and provide it to Aviatrix. Click the link here to download the application form.
| CloudN Interface | Private IP Address | Subnet Mask | Default Gateway | Primary DNS Server | Secondary DNS Server | Note |
|---|---|---|---|---|---|---|
| 1 - WAN | | | Not Required | Not Required | Not Required | |
| 2 - LAN | | | Not Required | Not Required | Not Required | |
| 3 - MGMT | | | | | | Management port for CloudN configuration and software upgrade |
| 4 - HPE iLO (optional) | | | | Not Required | Not Required | HP Integrated Lights-Out |
Aviatrix will pre-configure the IP addresses, subnet masks, default gateway and DNS servers on CloudN before shipping the unit.
The CloudN appliance does not require a public IP address, but the management port requires outbound internet access for software upgrades.
BGP is required between the LAN port of the appliance and the on-prem router for route propagation.