Azure Networking through the eyes of a Cisco engineer

It can be very confusing for an old-school network engineer to grasp the software-defined networking concepts that exist in Public Clouds. With this article I will make an attempt to explain Azure’s network building blocks in Cisco’s terms. Because, let’s face it, in the modern world the focus is moving from on-prem networking towards Public Clouds, and there’s nothing we can do apart from educating ourselves to stay in demand.

Hopefully, after reading this blog post you will be able to digest intermediate-to-advanced level articles or hold a conversation with the cloud team in your company.

Every network engineer should understand the following main concepts when it comes to Azure networking:

  • Virtual Network
    • Address Space
    • Subnet
  • Network Security Group
  • Route Tables, or User Defined Routing
  • Network Interface
    • Public IP address

There are other concepts, such as Virtual Network Gateway, ExpressRoute, Load Balancer, Azure DNS, and so on, but I will leave them for future posts.

Let’s start with a Virtual Network, or simply a VNet.

A VNet defines a connectivity domain within an Azure region. That is, VNets cannot span multiple regions. In Cisco’s terms, a VNet is what we know as Virtual Routing and Forwarding, or VRF, and that’s exactly what it is. I would even compare it to VRF-lite to emphasize its limited regional scope – VRF-lite exists within the boundaries of a single physical or virtual forwarding platform. A VNet is similar, as it’s contained within a single region.

A VNet provides the means for macro-segmentation within the Azure cloud. By default, resources connected to the same VNet can communicate with each other, but this behavior can be changed (see below). Resources deployed in different VNets (regardless of whether they’re in the same region) cannot communicate by default.

A VNet requires one or more Address Spaces to be allocated and arranged into Subnets (at least one Subnet should exist within the defined Address Space). An Address Space is a CIDR block of any supported size.
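To make the carve-up concrete, here is a minimal sketch using Python’s standard `ipaddress` module. The Address Space, Subnet names, and prefixes are made up for illustration – the point is the two rules Azure enforces: every Subnet must sit inside the VNet’s Address Space, and Subnets must not overlap.

```python
import ipaddress

# Hypothetical VNet address plan (names and prefixes are illustrative).
address_space = ipaddress.ip_network("10.10.0.0/16")
subnets = {
    "frontend": ipaddress.ip_network("10.10.1.0/24"),
    "backend":  ipaddress.ip_network("10.10.2.0/24"),
}

# Rule 1: every Subnet must be carved out of the VNet's Address Space.
for name, subnet in subnets.items():
    assert subnet.subnet_of(address_space), f"{name} is outside the Address Space"

# Rule 2: Subnets within the VNet must not overlap with each other.
assert not subnets["frontend"].overlaps(subnets["backend"]), "Subnets must not overlap"

print("address plan is valid")
```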

Think of a Subnet as a Switched Virtual Interface, or SVI, on a Catalyst platform. More precisely, an SVI that belongs to a certain VRF. You won’t have to define an IP address on this “virtual interface” to act as a gateway, or configure DHCP relay destinations, as Azure handles this automatically in the background.

Remember, by default all Subnets within the same VNet have unrestricted connectivity, which mirrors the Cisco world – SVI interfaces that belong to a single VRF make up a RIB (routing table) and traffic flows in an unrestricted fashion.

To change this behavior, one can leverage Azure’s Network Security Groups, also known as NSGs. This is the easiest concept to grasp, as we all love ACLs. An NSG is essentially an ACL. The key difference is that an NSG, once assigned, applies to traffic in both directions. Hence, every NSG has inbound and outbound rules. Another difference: NSGs are stateful.

NSGs can be assigned to Subnets or Network Interfaces. Best practice is to assign NSGs at the Subnet level to avoid an operational nightmare.

By default, NSGs have the following Inbound rules:

  • AllowVnetInBound. This one can be a little misleading. Even though its name suggests the scope is limited to the VNet, in reality it’s not. This rule allows connectivity from any known network. This includes on-prem networks (if BGP is used to exchange routing information) and any Subnets that belong to remote VNets, if VNet peering is established.
  • AllowAzureLoadBalancerInBound. This rule allows Azure load balancers to talk to resources deployed in a VNet. Specifically, it ensures that LB health probes continue to work.
  • DenyAllInBound. Explicit inbound Deny All.

In addition to Inbound rules, every NSG has the following default Outbound rules defined:

  • AllowVnetOutBound. This rule is similar to the one we’ve covered above. It permits outbound traffic to any known subnet, including on-prem destinations and peered VNets.
  • AllowInternetOutBound. By default, any resource is allowed to access the Internet.
  • DenyAllOutBound. Explicit outbound Deny All.
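The rule lists above behave like a classic ACL: rules are evaluated in ascending priority order and the first match wins. Here is a heavily simplified Python model of that evaluation. The default rule names and priorities (65000/65001/65500) match Azure’s defaults, but the matching logic is a toy – real NSG rules match on source, destination, port, and protocol, not a single tag.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int   # lower number = evaluated first
    source: str     # simplified: a service tag matched literally, or "Any"
    action: str     # "Allow" or "Deny"

# Default inbound rules present in every NSG (priorities 65000+ are Azure's).
default_inbound = [
    Rule("AllowVnetInBound", 65000, "VirtualNetwork", "Allow"),
    Rule("AllowAzureLoadBalancerInBound", 65001, "AzureLoadBalancer", "Allow"),
    Rule("DenyAllInBound", 65500, "Any", "Deny"),
]

def evaluate(rules, source_tag):
    """First matching rule by ascending priority wins, like an ACL."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.source in (source_tag, "Any"):
            return rule.name, rule.action
    return None

# A custom rule with a lower priority number overrides the defaults.
custom = [Rule("DenyVnetTraffic", 300, "VirtualNetwork", "Deny")] + default_inbound
print(evaluate(custom, "VirtualNetwork"))     # custom Deny beats AllowVnetInBound
print(evaluate(default_inbound, "Internet"))  # falls through to DenyAllInBound
```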

In certain cases it may be undesirable for virtual machines to access the Internet directly. This will be our next topic.

By default, a VNet can route traffic among its own Subnets, as well as Subnets within peered VNets (provided the peering configuration allows this). Routes learned via BGP over VPN or ExpressRoute are also injected into the VNet’s default routing table. In addition, a default route (0.0.0.0/0) points to Azure’s NAT gateway to support out-of-the-box Internet connectivity from the VNet.

This default behavior can be changed with the help of Route Tables, also known as User Defined Routing, or simply UDR. Think of UDR as PBR in Cisco’s world. With the help of UDR it is possible to change the default routing behavior of resources within the Subnet to which a Route Table is attached. The fact that you attach a Route Table to a Subnet made me think of it as PBR rather than static routing – UDR changes routing behavior per Subnet (remember the comparison to an SVI?), similar to PBR.

For each prefix in a Route Table, the next hop can be defined as one of the following options:

  • Virtual network gateway – Traffic matching this entry will be sent to the GatewaySubnet in the current VNet. GatewaySubnet is a special Subnet deployed by Azure once at least one Virtual network gateway (VPN or ExpressRoute) is deployed. The Virtual network gateway will then forward this traffic using its own routing tables.
  • Virtual network – Use the Address Space and known Subnets to send traffic directly. This is Azure’s default behavior, so you will only use this next-hop type when you want to override the behavior for a certain destination, such as the management IP address of an NVA. For example, you may force traffic to all Subnets within a VNet via an NVA, but exclude the NVA’s management IP address from that behavior – a more specific route with Virtual network as the next hop achieves exactly that.
  • Internet – Azure will send matching traffic towards the Internet via its NAT gateway.
  • Virtual appliance – Traffic can be sent to a network virtual appliance for processing, such as a firewall, router, or SD-WAN appliance. This option requires you to specify the appliance’s IP address, which should be known (reachable) from the perspective of the Subnet. This option can be compared to a static route on any Cisco appliance, as it requires a prefix and a next-hop IP address.
  • None – Matching traffic will be dropped, similar to the Null0 interface’s behavior on any Cisco platform.
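Route selection itself works as you would expect: the most specific (longest-prefix) matching route wins, which is exactly how the NVA management-IP carve-out described above is achieved. A rough Python sketch with illustrative prefixes and next-hop types (the real precedence rules between system, BGP, and user routes are more involved):

```python
import ipaddress

# Illustrative route table: system route for the VNet, a UDR forcing
# Internet-bound traffic via an NVA, and a /32 carve-out for the NVA's
# management IP so it stays directly reachable.
routes = [
    ("10.0.0.0/16", "Virtual network"),
    ("0.0.0.0/0",   "Virtual appliance"),
    ("10.0.5.4/32", "Virtual network"),
]

def next_hop(dest_ip, routes):
    """Return the next-hop type of the longest-prefix matching route."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(prefix), nh) for prefix, nh in routes
               if dest in ipaddress.ip_network(prefix)]
    # Most specific prefix (largest prefix length) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("8.8.8.8", routes))    # Virtual appliance (default route)
print(next_hop("10.0.3.7", routes))   # Virtual network (intra-VNet route)
print(next_hop("10.0.5.4", routes))   # Virtual network (/32 carve-out wins)
```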

The last (but not least) concept to understand is the Network Interface.

As you’ve probably already realized, in Azure everything is an object. Objects are de-coupled to ensure infrastructure can be easily built and maintained as code. Therefore, Virtual Machines don’t have any IP addresses assigned directly. This is done using Azure’s Network Interface construct. A Network Interface is linked to a VM and a Subnet. It receives its IP address(es) via Azure DHCP. There are no static IP addresses as we, Cisco engineers, know them – everything is managed by DHCP. A static IP address in Azure is just a different name for a DHCP reservation.

It is possible to re-assign a Network Interface from one Subnet to another, but not to move it between VNets. If a VM has to be moved from one VNet to another, it will have to be deleted and re-created in the new VNet. This is quite a strange restriction from an SDN perspective.

A single Network Interface can have multiple IP configurations, each with its own Private IP address and, optionally, a Public IP address. The Public IP address is, surprisingly, another object! You get it, right? The Private IP address is part of the Network Interface object, but not the Public IP address, or PIP.

You have to define the PIP separately and assign it to a Network Interface to enable inbound connectivity to the resource. Best practice is to avoid assigning Public IP addresses to virtual machines unless really necessary – and even then, think twice, then think again, before making the final decision. Use other Azure capabilities to support inbound connectivity instead, such as Azure Firewall, Azure Public Load Balancer, Azure Application Gateway, Azure Front Door, or solutions from 3rd-party vendors.
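To illustrate the de-coupling, here is a toy Python object model – not the real Azure SDK classes, just hypothetical types for illustration. The Public IP is a stand-alone object referenced by an IP configuration, while the private IP lives inside the Network Interface itself:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PublicIPAddress:
    """Stand-alone object, created and billed independently of any NIC."""
    name: str
    address: str

@dataclass
class IPConfiguration:
    private_ip: str                              # part of the NIC itself
    public_ip: Optional[PublicIPAddress] = None  # attached by reference only

@dataclass
class NetworkInterface:
    name: str
    subnet: str
    ip_configurations: List[IPConfiguration] = field(default_factory=list)

# One NIC, two IP configurations: one private-only, one with a PIP attached.
nic = NetworkInterface("vm1-nic", "frontend")
nic.ip_configurations.append(IPConfiguration("10.10.1.4"))
pip = PublicIPAddress("vm1-pip", "20.1.2.3")
nic.ip_configurations.append(IPConfiguration("10.10.1.5", public_ip=pip))

print(len(nic.ip_configurations))           # 2
print(nic.ip_configurations[0].public_ip)   # None – private-only config
```

Detaching the PIP is just dropping the reference; the PublicIPAddress object keeps existing and can be re-attached elsewhere, which is the whole point of the de-coupled design.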

That’s all I wanted to cover in this post. I hope it was helpful. If you have any questions, don’t hesitate to ask. If you’ve noticed any mistakes – let me know and I’ll get them corrected.

Keep in mind the following best practices:

  • Azure recommends using fewer, larger VNets rather than many smaller ones to reduce operational complexity.
  • It is not possible to change (add/remove) a VNet’s Address Space once VNet peering is established. You will have to delete the peering to add additional ranges to the VNet. Think downtime!
  • Azure reserves 5 IP addresses for internal use in every Subnet! Therefore, Subnets smaller than /29 are not supported, and even a /29 provides only 3 usable IP addresses.
  • Don’t blindly apply on-prem subnetting logic
  • Avoid small Address Spaces and Subnets, as you may put yourself into an unpleasant situation. Think of future expansions and the need to delete/rebuild/re-deploy certain resources due to dependencies. Again, think downtime!
  • If possible, stick to Class C-sized subnets. I know it doesn’t feel right, but Public Clouds belong to the dark side. It’s a different world.
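The 5-reserved-addresses rule is easy to model with Python’s `ipaddress` module (the CIDR blocks below are arbitrary examples):

```python
import ipaddress

# Azure reserves 5 addresses per Subnet: the network address, the last
# address, the default gateway, and two addresses for Azure DNS.
AZURE_RESERVED = 5

def usable_hosts(cidr: str) -> int:
    """Usable IPs in an Azure Subnet: total addresses minus Azure's 5."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED

print(usable_hosts("10.0.0.0/29"))  # 3   – the smallest supported Subnet
print(usable_hosts("10.0.0.0/24"))  # 251 – not the on-prem 254
```

This is why blindly re-using on-prem subnetting math (hosts = 2^n − 2) will leave you short in Azure.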

P.S. Azure is known to change at a fast pace. Features can change or get completely re-designed. The information presented above was valid at the time of writing (June 2020), but I cannot guarantee it will stay accurate for long. I do, however, believe it should still help to fill conceptual gaps.
