High Performance Transit Network - Insane Mode

This document discusses Aviatrix High Performance Transit Network and answers related questions.

Why is Transit VPC performance capped at 1.25Gbps?

In the current Transit VPC solution, throughput is capped at 1.25Gbps regardless of whether you have a 10Gbps Direct Connect (DX) link. The reason is that the Transit VPC deployment runs an IPSEC session between the VGW and the Transit gateway, and the VGW has a performance limitation.

AWS VGW IPSEC has a published performance limit of 1.25Gbps. AWS is not alone: all cloud providers have a similar cap, and in fact all software-based IPSEC VPN solutions do.

Why is that?

Most virtual or software-based routers are built on general purpose CPUs. Given the vast advances in CPU technology, why does IPSEC performance not scale further?

It turns out the problem lies in the nature of tunneling, a common networking technique for connecting two endpoints.

When two routers based on general purpose servers or virtual machines are connected by an IPSEC tunnel, there is a single UDP or ESP session between the two machines, as shown below.

[Diagram: a single IPSEC tunnel between two multi-core virtual routers]

In the diagram above, the virtual router has multiple CPU cores, but because only one tunnel is established, the Ethernet interface can direct incoming packets to only a single core. Performance is therefore limited to that one core, regardless of how many CPU cores and how much memory you provide.

This is true not only for IPSEC but for any tunneling protocol, such as GRE and IPIP.
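
The core-pinning effect can be sketched in a few lines of Python. This is a simplified model of receive-side scaling (RSS), not real NIC behavior: hardware typically uses a Toeplitz hash over the packet 5-tuple to pick a receive queue, and the addresses and ports below are made up for illustration.

    NUM_CORES = 8

    def rss_queue(src_ip, dst_ip, proto, src_port, dst_port):
        """Map a flow's 5-tuple to one receive queue (one queue per core).
        Python's hash() stands in for the NIC's hardware hash."""
        return hash((src_ip, dst_ip, proto, src_port, dst_port)) % NUM_CORES

    # A single IPSEC tunnel is a single flow: every packet carries the same
    # 5-tuple, so every packet lands in the same queue and one core does
    # all of the decryption work, no matter how many cores exist.
    core = rss_queue("10.0.0.1", "10.0.1.1", "udp", 4500, 4500)
    print(f"single tunnel -> all packets handled by core {core}")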

Aviatrix High Performance Insane Mode Encryption

The Aviatrix Insane Mode tunneling technique establishes multiple tunnels between the two virtual routers, allowing all CPU cores to participate so that performance scales with the CPU resources, as shown below.

[Diagram: Insane Mode with multiple IPSEC tunnels spanning all CPU cores]

With Aviatrix Insane Mode tunneling, IPSEC encryption can achieve 10Gbps, 25Gbps and beyond, leveraging the multiple CPU cores in a single instance, VM or host.
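
Extending the same toy model, the sketch below shows why multiple tunnels spread the load. It assumes, purely for illustration, that the tunnels are distinguished by UDP source port; the real Insane Mode tunnel scheme is Aviatrix-internal.

    NUM_CORES = 8

    def rss_queue(src_ip, dst_ip, proto, src_port, dst_port):
        """Map a flow's 5-tuple to one receive queue (one queue per core)."""
        return hash((src_ip, dst_ip, proto, src_port, dst_port)) % NUM_CORES

    # Multiple tunnels between the same pair of routers present distinct
    # 5-tuples, so incoming packets spread across receive queues and all
    # cores share the decryption work.
    tunnels = [("10.0.0.1", "10.0.1.1", "udp", 4500 + i, 4500) for i in range(16)]
    cores = {rss_queue(*t) for t in tunnels}
    print(f"16 tunnels spread across {len(cores)} of {NUM_CORES} cores")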

What are the use cases for Insane Mode?

  • 10Gbps Transit performance
  • Encryption over Direct Connect
  • Overcoming the VGW performance limit and its 100-route limit

How can I deploy Aviatrix Insane Mode?

Aviatrix Insane Mode is integrated into the Transit Network solution to provide encrypted 10Gbps performance between on-prem and the Transit VPC. For VPC-to-VPC traffic, Insane Mode can achieve 20Gbps.

Insane Mode can also be deployed in a flat architecture (as opposed to Transit VPC) for 10Gbps encryption.

The diagram below illustrates high performance encryption between the Transit VPC and on-prem, and between the Transit VPC and Spoke VPCs.

[Diagram: Insane Mode encryption between Transit VPC, Spoke VPCs and on-prem]

Instance Sizes and IPSEC Performance

Insane Mode is available on AWS for the C5 series.

MTU size    C5.9xlarge    C5.18xlarge
1500        8.21Gbps      9Gbps
9000        12Gbps        22Gbps
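
The jump from MTU 1500 to MTU 9000 can be explained with simple packet arithmetic. The sketch below assumes that encryption and forwarding carry a roughly fixed per-packet cost, so fewer, larger packets mean less CPU per byte; the packet counts themselves are plain division.

    # Packets required to move 1 GB of data at each MTU.
    for mtu in (1500, 9000):
        packets_per_gb = 1e9 / mtu
        print(f"MTU {mtu}: ~{packets_per_gb:,.0f} packets per GB")

    # MTU 9000 needs ~6x fewer packets than MTU 1500 for the same data,
    # consistent with the higher throughput in the table above.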

What is the Aviatrix hardware appliance?

Aviatrix offers a 1U rack-mountable hardware appliance that is deployed in the datacenter and works with the Aviatrix gateway.

The Aviatrix CloudN appliance specification:

Aviatrix CloudN          Specification                      Notes
Dimension                1U rack mount
Server                   HPE ProLiant DL360 Gen10 Server
CPU                      8 cores
Memory                   16GB
PCIe                     3.0
10Gbps Ethernet ports    2                                  1 LAN port and 1 WAN port
1Gbps Ethernet ports     4                                  1 Management port

More information on the HPE ProLiant DL360 Gen10 Server can be found here.

How to deploy the Aviatrix hardware appliance?

A redundant datacenter deployment is shown in the diagram below, where R1 and R2 are two edge routers that connect to the VGW over DX, and R3 and R4 are two routers that connect to the inside of the datacenter. The VGW is used only to terminate DX. Aviatrix CloudN runs BGP sessions with R3 and R4 to collect datacenter routes, and the Aviatrix gateway and the on-prem CloudN appliance run a BGP session to propagate on-prem routes to the Transit VPC. IPSEC tunnels are also built between the two, as sketched after the diagram.

[Diagram: redundant datacenter deployment with CloudN, edge routers R1-R4 and VGW]
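
To make the route flow concrete, here is a toy Python model of the propagation just described. The prefixes are made up, and real BGP carries far more state (attributes, withdrawals, keepalives) than a dictionary.

    # Routes CloudN learns over its BGP sessions with R3 and R4.
    datacenter_routes = {
        "10.10.0.0/16": "R3",
        "10.20.0.0/16": "R4",
    }

    # CloudN re-advertises these prefixes over its BGP session with the
    # Aviatrix gateway, with itself as next hop; the traffic itself then
    # rides the IPSEC tunnels between the two.
    transit_vpc_routes = {prefix: "CloudN" for prefix in datacenter_routes}
    print(transit_vpc_routes)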

One deployment layout is shown below.

[Diagram: datacenter rack layout for CloudN]

How to configure Insane Mode for Transit VPC?

At Step 1 of the Transit Network workflow, select “Insane Mode Encryption”.
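
For readers who script their deployments, the same step could in principle be driven through the Controller's REST API. The sketch below is a hypothetical outline only: the endpoint path, action names, and parameters (login, create_transit_gw, insane_mode) are placeholder guesses rather than the documented API, so consult the Aviatrix API reference for the real names.

    import requests

    CONTROLLER = "https://controller.example.com/v1/api"  # placeholder address

    # Log in to obtain a session ID (CID) -- field names assumed.
    login = requests.post(CONTROLLER, data={
        "action": "login",                 # assumed action name
        "username": "admin",
        "password": "example-password",
    }, verify=False).json()

    # Launch a transit gateway with Insane Mode enabled -- all names assumed.
    resp = requests.post(CONTROLLER, data={
        "action": "create_transit_gw",     # hypothetical action name
        "CID": login.get("CID"),
        "vpc_id": "vpc-0123456789abcdef0", # placeholder VPC
        "gw_size": "c5.9xlarge",           # a C5 size, per the table above
        "insane_mode": "yes",              # hypothetical parameter name
    }, verify=False).json()
    print(resp)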

Beta Testing Checklist

The deployment topology for Aviatrix CloudN beta testing is as follows:

[Diagram: Aviatrix CloudN beta testing topology]

Please collect the information requested below and provide it to Aviatrix. The Beta application form can be downloaded here.

CloudN Interface   Private IP Address   Subnet Mask   Default Gateway   Primary DNS Server   Secondary DNS Server   Note
1 - WAN                                               Not Required      Not Required         Not Required
2 - LAN                                               Not Required      Not Required         Not Required
3 - MGMT                                                                                                           Management port for CloudN configuration and software upgrade

Aviatrix will pre-configure the IP addresses, subnet masks, default gateway and DNS servers on CloudN before shipping the unit.