Creating Inbound NAT Rules to Connect to a Single VM in Azure (Port Forwarding)

Nisha P
15 min read · Nov 7, 2022


Inbound NAT rules are an optional, configurable component of the Azure public load balancer, which distributes inbound traffic to a backend pool of compute resources on our virtual networks. We can configure inbound NAT rules to forward traffic that arrives on a specified port of the load balancer’s frontend IP address to a particular virtual machine (VM) within the load balancer’s backend pool. This process is referred to as “port forwarding,” and it can target a single virtual machine, multiple VMs, or a virtual machine scale set in the backend pool on the internal virtual network.

Diagram of Load Balancer and Inbound NAT Rule Port Forwarding | Source: Microsoft

Use Case for Port Forwarding

Port forwarding allows network administrators to use a single public IP address for all inbound communication from the Internet while exposing multiple internal servers, each reachable on its own private IP address and port. It helps protect servers from unwanted access and is also a great way to conserve public IP addresses. The same result can, of course, also be achieved with a firewall and inbound firewall rules.

How to Enable Port Forwarding in Azure

To enable port forwarding to a single virtual machine on Azure, we create an inbound NAT rule on an Azure load balancer that explicitly maps a frontend (Internet-facing) port requested by a client to a target VM that is a member of the load balancer’s backend pool on the private virtual network. Essentially, port forwarding maps an external “port” on your Internet-facing IP address to a particular device on your local private network.

When targeting multiple virtual machines or a virtual machine scale set, the rule references the entire backend pool rather than a single VM, and we must pre-allocate a range of frontend ports to create the desired port mappings. This option not only gives us virtual machine high availability, but also creates port mappings dynamically for any new virtual machines that are added to the backend pool.

Load Balancer using Inbound NAT rules to connect to multiple VMs and scale sets — Source: Microsoft
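
For reference, this multiple-VM style of rule can also be scripted. A rough Azure CLI sketch is shown below, assuming a CLI version that supports backend-pool-targeted inbound NAT rules; the rule name, port range, pool name, and resource names are illustrative and do not come from this lab.

# Inbound NAT rule that targets a whole backend pool and pre-allocates
# frontend ports 500-1000, each mapping to backend port 22 on a pool member
az network lb inbound-nat-rule create --resource-group NAT-LB-RG --lb-name NAT-LB --name natrule-pool-ssh --protocol Tcp --backend-pool-name myBackendPool --frontend-port-range-start 500 --frontend-port-range-end 1000 --backend-port 22 --frontend-ip-name myFrontend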

Outbound Connections for Virtual Machines

When deciding to place virtual machines behind a load balancer as part of a backend pool, consider whether the virtual machines should be allowed outbound connectivity and which outbound IP address should be presented to the endpoints they connect to. It is advisable to reference Microsoft’s best-practices guidance to ensure that the selected method for outbound connectivity meets the needs of the workloads in your Azure environment.

In some scenarios, outbound connectivity to the Internet may be needed for certain virtual machines; for example, some VMs may need to reach the Internet for Windows Updates. Azure provides several outbound connectivity methods for virtual machines, each suited to different scenarios. When we configure the load-balancing rule settings during the public load balancer creation process, it is at this step that we select our desired option for outbound Source Network Address Translation (SNAT). While we can choose to use the frontend IP address of the load balancer for outbound access, associating a Network Address Translation (NAT) gateway with the subnet is the preferred solution for a production environment because it avoids issues with SNAT port exhaustion. A NAT gateway is the option deployed in this lab demonstration.

Outbound Connection Options for Azure Virtual Machines | Source: Microsoft

Lab Demonstration

In this lab demonstration, I implemented port forwarding by deploying an Azure public load balancer in front of a subnet of my virtual network and configuring inbound NAT rules that forward traffic from the frontend to specific VMs in the load balancer’s backend pool. I also integrated a NAT gateway to handle outbound traffic originating from the virtual machines, using the public IP address of the NAT gateway.

I was able to accomplish this solution by executing the following steps:

  • Created a virtual network and virtual machines
  • Created a standard SKU public load balancer with frontend IP, health probe, backend configuration, load-balancing rule, and inbound NAT rules
  • Created a NAT gateway for outbound internet access for the backend pool
  • Installed and configured a web server on the VMs to demonstrate the port forwarding and load-balancing rules

Create a virtual network and virtual machines

In Step 1, we create a virtual network that will facilitate communication between our subnets and the resources that reside on them. In this example, I created a resource group named NAT-LB-RG during the VM creation process. I chose the East US region for deployment of this virtual machine, placed it in Availability Zone 1, and named it NAT-LB-VM. I selected Ubuntu Server as the OS image and chose SSH public key as the authentication type to provide secure remote access to the VM using a private key.

Create Virtual Machine

Moving on to the Networking tab, I created the virtual network in this step, naming it NAT-LB-vnet. I gave the network a CIDR range of 10.1.0.0/16 and configured a backend subnet with an address range of 10.1.0.0/24. Here we click OK to accept the virtual network configuration for the virtual machine.

Create Virtual Network for VM
Create a virtual machine — Networking options
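
For readers who prefer scripting this step, a rough Azure CLI equivalent is shown below. The lab itself used the portal; this sketch assumes the Azure CLI is installed and signed in, and the subnet name myBackendSubnet mirrors the one referenced later in the lab.

# Resource group for the lab resources
az group create --name NAT-LB-RG --location eastus
# Virtual network with a 10.1.0.0/16 address space and a backend subnet of 10.1.0.0/24
az network vnet create --resource-group NAT-LB-RG --name NAT-LB-vnet --address-prefixes 10.1.0.0/16 --subnet-name myBackendSubnet --subnet-prefixes 10.1.0.0/24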

A new network security group (NSG) was configured for the VM, as shown in the image below, with an inbound security rule at priority 300 for the HTTP service. This allows inbound HTTP communication to the virtual machine from the Internet, which is blocked by default; without this rule, only the HTTP traffic originating from the load balancer’s health probe would be allowed. The NSG was applied to the network interface of the virtual machine. If needed, we can also use the network security group to further filter traffic so that only specific source IP addresses are allowed inbound HTTP access to the VMs.

Creation of Network Security Group with Inbound Security Rules for the VM1
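
A rough Azure CLI sketch of the same NSG configuration follows; the NSG and rule names here are illustrative, since the portal assigned its own defaults in the lab.

# Network security group for the VM network interfaces
az network nsg create --resource-group NAT-LB-RG --name NAT-LB-nsg
# Allow inbound HTTP (TCP 80) from the Internet at priority 300
az network nsg rule create --resource-group NAT-LB-RG --nsg-name NAT-LB-nsg --name AllowHTTP --priority 300 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 80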

In the next step, I created a second virtual machine named NAT-LB-VM-2 and deployed it in the same region as VM1 (East US) with an Ubuntu Server OS image. I placed VM2 in Availability Zone 2 and chose SSH public key as the authentication type to provide secure remote access to the VM using a private key.

Create Virtual Machine 2 — Basics

In the following step, the Networking configuration, I added VM2 to the virtual network, on the backend subnet where VM1 is already deployed. I associated the existing network security group with the network interface of this VM so that the configured rules apply to it as well.

Create Virtual Machine 2 — Networking
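
The two virtual machines can also be created from the CLI. The sketch below is illustrative rather than exactly what the portal produced: the admin username and Ubuntu image alias are assumptions, and --public-ip-address "" keeps the VMs without instance-level public IPs, as in the lab.

# VM1 in Availability Zone 1, attached to the backend subnet and NSG, SSH key authentication
az vm create --resource-group NAT-LB-RG --name NAT-LB-VM --image Ubuntu2204 --zone 1 --vnet-name NAT-LB-vnet --subnet myBackendSubnet --nsg NAT-LB-nsg --admin-username azureuser --generate-ssh-keys --public-ip-address ""
# VM2 in Availability Zone 2 on the same subnet, reusing the same NSG
az vm create --resource-group NAT-LB-RG --name NAT-LB-VM-2 --image Ubuntu2204 --zone 2 --vnet-name NAT-LB-vnet --subnet myBackendSubnet --nsg NAT-LB-nsg --admin-username azureuser --generate-ssh-keys --public-ip-address ""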

Create a Standard SKU Public Load Balancer

In Step 2, I created the Azure public Standard load balancer and named it NAT-LB. I deployed the load balancer to the East US region, where my other resources already reside, and therefore went with the Regional tier option.

Create load balancer — Basics

After clicking on Next to advance to the Frontend IP configuration page, I configured a Zone-Redundant public IP address for the load balancer’s frontend.

Create load balancer — Add Frontend IP configuration
Create load balancer — Frontend IP configuration
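
A rough CLI equivalent of the load balancer basics and frontend configuration is shown below; the public IP and backend pool names are illustrative.

# Zone-redundant Standard public IP for the load balancer frontend
az network public-ip create --resource-group NAT-LB-RG --name NAT-LB-PIP --sku Standard --zone 1 2 3
# Regional Standard public load balancer with a frontend and an (empty) backend pool
az network lb create --resource-group NAT-LB-RG --name NAT-LB --sku Standard --public-ip-address NAT-LB-PIP --frontend-ip-name myFrontend --backend-pool-name myBackendPool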

After clicking Next to advance to the Backend pools configuration page, we give the backend pool a name and select the virtual network it will be associated with. We then click “+ Add” to select the individual resources to add as members of this backend pool.

Create a load balancer — Add backend pool

On the next screen, I selected both VM1 (NAT-LB-VM) and VM2 (NAT-LB-VM-2) and clicked the “Add” button to add them to the IP configurations for the backend pool.

Add IP Configurations to backend pool

Click on Save to confirm the backend pool configuration for the load balancer.

Add backend pool
Create a load balancer — Save backend pool
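
In the portal we pick the VMs directly; from the CLI, the equivalent is to add each VM’s NIC IP configuration to the backend pool. The NIC names below are illustrative (the CLI typically names them <vm-name>VMNic; check with az vm nic list).

# Add VM1's NIC IP configuration to the backend pool
az network nic ip-config address-pool add --resource-group NAT-LB-RG --nic-name NAT-LB-VMVMNic --ip-config-name ipconfig1 --lb-name NAT-LB --address-pool myBackendPool
# Add VM2's NIC IP configuration to the backend pool
az network nic ip-config address-pool add --resource-group NAT-LB-RG --nic-name NAT-LB-VM-2VMNic --ip-config-name ipconfig1 --lb-name NAT-LB --address-pool myBackendPool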

After clicking on Next, we advance to the Inbound rules configuration page where we are able to configure any Load Balancing rules as well as Inbound NAT rules. Load-balancing rules are used to specify a pool of backend resources to route traffic to, balancing the load across each instance. Click on “+ Add a load balancing rule” to create a rule to tie the frontend IP configuration to the backend pool of resources.

Create load balancer rules

In the next step, I added a load-balancing rule named myHTTPRule, selected IPv4 as the IP version, and then selected the frontend IP address (myFrontend) and the backend pool created in the previous step from the drop-down lists. This rule reflects a typical load-balancing scenario for virtual machine high availability, as discussed in my previous article. In this load-balancing rule, we define how incoming traffic is evenly distributed across all instances in the backend pool when a client accesses the load balancer’s frontend IP address on the rule’s port (rather than one of the inbound NAT rule ports).

Diagram of Load Balancing Rule Evenly Distributing Traffic to backend pool | Source: Microsoft

We are required to create or select a health probe here, which monitors the state of the compute resources in the backend pool to ensure they are healthy and available to accept traffic. I configured the health probe to use TCP port 80 for communication to the backend resources and set an interval of 5 seconds between probes.

Since we will enable an inbound NAT rule as part of this deployment, I enabled TCP Reset and set an idle timeout value of 15 minutes so that connections to both client and server endpoints are released when the idle timeout is reached. I kept the default setting of None for session persistence (stickiness).

For outbound source network address translation (SNAT), I kept the default setting of “Use outbound rules to provide backend pool members access to the internet.” With this setting, we can configure explicit outbound rules separately; without them, we would fall back to the default outbound configuration, where the frontend IP address of the load balancer is used for connections originating from the VMs. While I did not configure any outbound rules on the load balancer, we will override this outbound behavior by integrating a NAT gateway in a later step. Click Add to add the load-balancing rule to the load balancer.

Create health probe and add to load balancer rule
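
The health probe and load-balancing rule can be sketched in the CLI as follows; the probe name is illustrative, and the rule mirrors the portal settings described above (TCP 80, 5-second probe interval, TCP reset enabled, 15-minute idle timeout).

# TCP health probe on port 80, probing every 5 seconds
az network lb probe create --resource-group NAT-LB-RG --lb-name NAT-LB --name myHealthProbe --protocol Tcp --port 80 --interval 5
# Load-balancing rule tying the frontend to the backend pool on port 80
az network lb rule create --resource-group NAT-LB-RG --lb-name NAT-LB --name myHTTPRule --protocol Tcp --frontend-port 80 --backend-port 80 --frontend-ip-name myFrontend --backend-pool-name myBackendPool --probe-name myHealthProbe --idle-timeout 15 --enable-tcp-reset true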

In the next step, I began the creation of an inbound NAT rule by clicking “+ Add an inbound nat rule”.

Load balancing rule created

In the Add inbound NAT rule configuration pane, we define the port mappings from the Internet-facing ports to the specific virtual machines in the backend pool that the traffic should be translated to.

Inbound NAT rule diagram

I named the first rule “natrule-VM-221” and selected the target virtual machine, “NAT-LB-VM”, that this rule will translate to. For the network IP configuration, I selected the IP address of VM1. I then selected the frontend IP address and the specific frontend port (221) to be used by clients who access the load balancer’s Internet-facing IP address and want to connect to VM1. I selected Custom for the service and specified TCP port 22 as the backend port used to connect to this VM in the backend pool; TCP port 22 is used for SSH connectivity to a device. I kept the default settings for the remaining options and then clicked “Add” to add this inbound NAT rule to the load balancer.

Add inbound NAT Rule
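
A rough CLI equivalent of this rule is shown below. Creating the rule and attaching it to the VM’s NIC are two separate operations in the CLI; the NIC and IP configuration names are illustrative.

# Inbound NAT rule: frontend port 221 -> backend port 22 (SSH)
az network lb inbound-nat-rule create --resource-group NAT-LB-RG --lb-name NAT-LB --name natrule-VM-221 --protocol Tcp --frontend-port 221 --backend-port 22 --frontend-ip-name myFrontend
# Associate the rule with VM1's NIC IP configuration
az network nic ip-config inbound-nat-rule add --resource-group NAT-LB-RG --nic-name NAT-LB-VMVMNic --ip-config-name ipconfig1 --lb-name NAT-LB --inbound-nat-rule natrule-VM-221

The rule for VM2 follows the same pattern with frontend port 222.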

To add another rule to create a port mapping for VM2, click on “+ Add an inbound nat rule” from this screen.

Load balancer Inbound NAT rule for port 221 forward to VM1 created

On this Add inbound NAT rule pane, I configured the rule for VM2 similarly to the rule created for VM1, but selected VM2 as the target virtual machine and defined a frontend port of 222 that the connecting client must request in order to reach this VM. Here we click Add to save the inbound NAT rule to the load balancer’s configuration.

Add inbound NAT rule for frontend port 222

On the Inbound rules page, we can verify that we now have both a load-balancing rule and two inbound NAT rules mapping ports 221 and 222 to VM1 and VM2, respectively.

Since no outbound rules were to be configured for this scenario, I then clicked on Review + create to kick off the deployment of the Public Load Balancer.

Inbound NAT rules created for port 221 (VM1) and port 222 (VM2) created

Once the load balancer completed the deployment process, I navigated to its Frontend IP configuration under the Settings menu. I verified that the Internet-facing frontend was assigned a public IP address of 52.147.210.208.

Load Balancer Frontend IP Address

Next, I navigated to the Inbound NAT rules page under settings and was able to verify the port mappings that were created between the frontend and the VMs of the backend pool.

Verification of VM Port Mappings

In the images below, I inspected the configuration overview of VM1 and VM2 and observed that, although I never configured any public IP addresses on these VMs, they assumed the public IP address assigned to the frontend of the load balancer. This is expected behavior once virtual machines are placed behind a public load balancer as part of a backend pool. Any traffic originating from the VMs at this stage would pass through the load balancer and present the load balancer’s outbound IP address on those connections. This is not the most optimal solution for outbound connections due to constraints such as SNAT port exhaustion, so we will integrate a NAT gateway into the subnet in the next step to provide a more scalable solution.

Public IP address of NAT-LB-VM (VM1)
Public IP address of NAT-LB-VM-2 (VM2)

Create NAT Gateway for outbound internet access for the backend pool

In this step, I deployed a Network Address Translation (NAT) gateway into the subnet to provide outbound Internet access for the virtual machines in the backend pool. A NAT gateway takes precedence over other outbound scenarios (including the load balancer and instance-level public IP addresses) and replaces the default Internet destination of a subnet, so outbound connections originating from the virtual machines in the subnet will egress through the NAT gateway.

I created the NAT gateway in the same region as the other resources in this demonstration and set an idle timeout value of 15 minutes.

Create Network Address Translation (NAT) gateway — Basics

Clicking Next to advance to the Outbound IP configuration page, I configured a new public IP address by clicking “Create a new public IP address” and gave it the name myNATGateway-PIP.

Create Network Address Translation (NAT) gateway — Outbound IP

Clicking Next to advance to the Subnet configuration page, I associated the NAT gateway with the NAT-LB-vnet virtual network and the backend subnet (myBackendSubnet) that it will provide outbound access for. Click Review + create to kick off the deployment of the NAT gateway.

Create NAT Gateway — Associate to VNet and Subnet
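
A rough CLI equivalent of the NAT gateway deployment and subnet association is shown below; the NAT gateway resource name is illustrative, while the public IP name mirrors the lab.

# Standard public IP for the NAT gateway's outbound address
az network public-ip create --resource-group NAT-LB-RG --name myNATGateway-PIP --sku Standard
# NAT gateway with a 15-minute idle timeout, using that public IP
az network nat gateway create --resource-group NAT-LB-RG --name myNATGateway --public-ip-addresses myNATGateway-PIP --idle-timeout 15
# Attach the NAT gateway to the backend subnet so outbound traffic egresses through it
az network vnet subnet update --resource-group NAT-LB-RG --vnet-name NAT-LB-vnet --name myBackendSubnet --nat-gateway myNATGateway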

Once the NAT gateway completed the deployment process, I navigated to its Outbound IP configuration under the Settings menu to verify its public IP address. Here, Azure assigned a public IP address of 20.228.167.254.

In the next step, I move on to testing out the deployment to ensure that it works as expected.

NAT Gateway Outbound IP address

Testing the Deployment

For testing this deployment, I will install and configure a web server on the VMs to demonstrate the port forwarding and load-balancing rules.

I began testing the deployment by using PowerShell to SSH to VM1 using the frontend IP address of the load balancer and the port that was mapped to VM1, which was port 221 (eunishap@52.147.210.208 -p 221). Since the VM was created using a public and private key pair, I referenced the private key that I downloaded during the VM deployment process to provide my authentication to connect to the device.

ssh -i "C:\Users\eunis\Downloads\NAT-LB-KEY.pem" eunishap@52.147.210.208 -p 221

The connection was successful and I was presented with the command line prompt of NAT-LB-VM.

Requesting an SSH connection to virtual machine NAT-LB-VM using SSH Key

In the next step, I installed package updates and an NGINX web server on this VM1 by executing the commands listed below.

sudo apt-get -y update
sudo apt-get -y install nginx
Package updates and an NGINX web server installation

I proceeded with these same steps to connect to VM 2. From PowerShell, I initiated an SSH connection to VM2 using the frontend IP address of the load balancer and the port that was mapped to VM2, which was port 222 (eunishap@52.147.210.208 -p 222). Since the VM was created using a public and private key pair, I referenced the private key that I downloaded during the VM deployment process to provide my authentication to connect to the device.

ssh -i "C:\Users\eunis\Downloads\NAT-LB-VM-2_key.pem" eunishap@52.147.210.208 -p 222

The connection was successful and I was presented with the command line prompt of NAT-LB-VM-2.

Requesting SSH connection to virtual machine NAT-LB-VM-2

In the next step, I installed package updates and an NGINX web server on VM2 by executing the same two commands used on VM1.

Once the NGINX installation completed successfully on both virtual machines, I launched my web browser and navigated to the frontend IP address of the load balancer, and the default NGINX website was displayed successfully.

The default NGINX website is displayed.

I took things a step further and wanted to verify that the virtual machines would present the public IP address of the NAT gateway for any outgoing traffic they initiate. From each VM, I queried my public IP address by executing the command below and observed that the outbound IP address of the NAT gateway (20.228.167.254) was returned in both cases.

dig +short myip.opendns.com @resolver1.opendns.com
dig command for determining public IP address in Linux
dig command for determining public IP address in Linux

This concludes my lab demonstration on how to configure Inbound NAT Rules to enable Port Forwarding when using Azure Load Balancers. Please stay tuned because the load balancer will return in a future lab and project very soon!

Thank you for reading!
