Securely Federating Cloud Services with VNS3

30 Jul 2020

Service Endpoints are a great concept. They allow you to access things like s3 buckets in AWS from within your VPC without sending traffic outside of it. In fact, from a compliance perspective they are essential. Both Amazon Web Services and Microsoft Azure have them. One drawback in AWS is that they can only be accessed from the VPC in which they have been set up. But what if you wanted to access that s3 bucket securely from another region or from an Azure VNET? Perhaps you have an Azure SQL Data Warehouse that you want to access from your application running in AWS. Service Endpoints have their limitations. For many companies developing a multi-cloud, multi-region strategy, it’s not clear how to take advantage of these services. We at Cohesive Networks have developed a method that allows you to access these endpoints across accounts, regions, and cloud providers. This blog post discusses in detail how we achieve that.

Using AWS Private Service Endpoints

In order to interact with AWS Service Endpoints you need a few things. You need DNS resolution, which needs to occur from inside the VPC, and you need network extent, or the ability to get to whatever address DNS resolves to. Both conditions are easy to achieve from any VPC or VNET using VNS3 configured in what we call a Federated Network Topology, utilizing the VNS3 Docker-based plugin system to run Bind 9 DNS forwarders. Let’s start by taking a look at the VNS3 Federated Network Topology.

VNS3 Federated Architecture Diagram

The core components that make up the Federated Network Topology are a Transit Network, built from VNS3 controllers configured in a peered topology, and the individual Node VNS3 controllers running in the VPCs and VNETs. Every controller is assigned a unique ASN at the point of instantiation. ASNs, or Autonomous System Numbers, are part of what allows BGP networks to operate. We configure the Node controllers to connect into the Transit Network via route-based IPsec connections. Because the VPNs are route based, each VNS3 controller can advertise the network range of the VPC or VNET it sits in that it wants other networks to be able to reach. That route advertisement is tied to the controller's ASN, which is how other VNS3 controllers learn how to reach its network. This gives us the network extent we need, so that even in a complex network comprised of tens, hundreds, or thousands of virtual networks spread across accounts, regions, and cloud providers, we have a manageable network with minimal complexity.
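As a purely illustrative sketch (the ASNs and CIDR ranges below are placeholders, not values VNS3 assigns by default), a small federation might look like this:

Transit controller A (AWS us-east-1)    ASN 64512    advertises 10.10.0.0/16
Transit controller B (AWS us-east-1)    ASN 64513    advertises 10.10.0.0/16
Node controller (AWS eu-west-1 VPC)     ASN 64520    advertises 10.20.0.0/16
Node controller (Azure US West VNET)    ASN 64530    advertises 172.16.0.0/16

Each Node controller builds a route-based IPsec tunnel into the Transit Network and exchanges these advertisements, so a workload in the Azure VNET can reach 10.10.0.0/16 in the transit VPC without any static routes beyond the one pointing at its local VNS3 controller.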


VNS3 Transit Diagram explainer

Setting up DNS Resolution

The next component we need is a system to return the correct localized DNS response. AWS uses what is called split-horizon DNS, where the answer you get depends on where you make the request from. That is, a query made from outside the VPC returns the public-facing IP address, while the same query made from inside returns the private IP address. Say we are in an Azure VNET in US West and need to access an s3 bucket in us-east-1. You would install the AWS command line tools (CLI) on your Linux or Windows virtual machine and run something like:

aws s3 ls s3://my-bucket-name

to get a current listing of objects in your s3 bucket. But you want this interaction to be routed across your secure, encrypted network. How do you get DNS to resolve to the private entry point inside your sealed VPC in AWS us-east-1 rather than to the AWS public gateway? The answer lies in DNS.
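You can observe the split-horizon behaviour directly with a lookup tool such as dig; the hostname below matches the S3 name pattern used in the configuration that follows, while the answers are illustrative placeholders rather than real addresses:

dig +short s3.amazonaws.com
# via a public resolver: a public-facing address, e.g. 203.0.113.10

dig +short s3.amazonaws.com
# via the VPC's x.x.x.2 resolver (with an S3 endpoint configured in the VPC): a private address, e.g. 10.10.1.25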

VNS3 has an extensible plugin system based on the Docker subsystem. You can install any piece of Linux software you want into a container and route traffic to it via the controller's comprehensive firewall. Here we install Bind 9, the open source, full-featured DNS server, into the plugin system. We configure Bind 9 to act as a forwarder filtered on certain DNS naming patterns. In this case we look for the patterns s3.amazonaws.com and s3-1-w.amazonaws.com and forward matching queries on to the plugin address of the VNS3 transit controller running in AWS us-east-1, which in turn forwards all of its incoming requests down to the AWS-supplied DNS at x.x.x.2 of its VPC. This returns the correct IP of the s3 Service Endpoint that has been configured in the transit VPC in AWS us-east-1.

So when the AWS CLI makes the call for “s3://my-bucket-name”, the first action that takes place is a DNS query: “What IP do I use to reach this service?” Next it attempts to connect to that IP, which we have made reachable by creating the network extent to that address. From there you can do everything you need to do with the bucket.

There are a few other configuration items to put in place as well. You need to point either the whole VNET subnet the virtual machine resides in, or the individual network interface of your VM, at the private IP of your VNS3 controller as its DNS source. The VNS3 firewall needs to be configured to send incoming TCP and UDP port 53 traffic to the container running Bind 9. And you need routing rules for your subnet or network interface that point at the VNS3 controller, either for the explicit addresses of the Service Endpoints in AWS or for all traffic. The latter has extra benefits, as VNS3 can then act as your ingress/egress firewall, NAT, and traffic inspector by adding further plugins like WAF and NIDS.
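As a rough sketch, the forward-only zones in the Bind 9 plugin container might look like the following, where 10.10.1.5 is a placeholder for the forwarding address on the us-east-1 transit controller:

zone "s3.amazonaws.com" {
    type forward;
    forward only;
    forwarders { 10.10.1.5; };
};

zone "s3-1-w.amazonaws.com" {
    type forward;
    forward only;
    forwarders { 10.10.1.5; };
};

Handing incoming TCP and UDP port 53 traffic to the plugin container is a DNAT rule in VNS3's iptables-style firewall, roughly like the lines below; 198.51.100.2 stands in for the Bind 9 container's plugin network address, and the exact rule syntax may vary by VNS3 version:

PREROUTING_CUST -p udp --dport 53 -j DNAT --to 198.51.100.2:53
PREROUTING_CUST -p tcp --dport 53 -j DNAT --to 198.51.100.2:53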

Conclusion

The above illustrates one use case: accessing an AWS s3 bucket from Azure across an encrypted network with full visibility and attestability. Other possibilities include the entire set of Service Endpoints offered by Azure and AWS, and the mechanics are the same whether going from AWS to Azure, vice versa, or across regions inside a single cloud provider. The takeaway is that VNS3 has powerful capabilities that allow you to create secure, extensible networks with reduced complexity and inject network functions directly into them, letting you take advantage of cloud provider services in a cloud-agnostic way.