Can't connect to AKS internal ingress on private v-net from peered v-net

2020-02-06 nginx kubernetes-ingress azure-virtual-network azure-aks

We have an original network with a number of Linux VMs running Docker Swarm containers. We're migrating them to AKS, but this has to be done in steps. Currently we have set up an ingress on a private subnet with an internal load balancer as shown below, following this example: https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip

The issue: when connected to the original v-net we can't connect to 10.3.0.4 in the second v-net, i.e. 10.0.0.12:80 -> 10.3.0.4:80. From a test VM on the same subnet as the AKS cluster we can connect to 10.3.0.4 just fine.

The strange thing is we can connect from the original v-net to a service endpoint on the NIC directly, i.e. 10.0.0.12:80 -> 10.3.3.134:80.

This is not a solution though, as we could have multiple replicas.

Any ideas why the load balancer is not visible to the original v-net?
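For context, the connectivity checks looked roughly like this (a sketch; 10.0.0.12 is a VM in the original v-net and the test VM sits in the AKS subnet, both used purely for illustration):

# From the test VM in the AKS subnet -- works
curl -v http://10.3.0.4/            # internal load balancer front-end IP

# From a VM in the original, peered v-net (10.0.0.12) -- times out
curl -v http://10.3.0.4/            # internal load balancer front-end IP

# From the same VM in the original v-net -- works
curl -v http://10.3.3.134/          # pod endpoint behind the service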

Connected Devices in V-Net

original subnet
10.0.0.0/24
10.0.0.0 - 10.0.0.255 (256 addresses)
10.0.1.0/24
10.0.1.0 - 10.0.1.255 (256 addresses)

which have peering set to each other

aks subnet
10.3.0.0/16
10.3.0.0 - 10.3.255.255 (65536 addresses)
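
Since the two v-nets are peered, it may also be worth confirming the peering state from the CLI (a sketch; the resource group and v-net names are placeholders):

# Check peering status from the original v-net's side
az network vnet peering list \
    --resource-group <original-rg> \
    --vnet-name <original-vnet> \
    --output table
# PeeringState should show "Connected" in both directions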
az aks create \
    --resource-group AKS_Workshop \
    --name workshop-aks-cluster \
    --network-plugin azure \
    --vnet-subnet-id "/subscriptions/../resourceGroups/AKS_Workshop/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/default" \
    --docker-bridge-address 172.17.0.1/16 \
    --dns-service-ip 10.4.0.2 \
    --service-cidr 10.4.0.0/16 \
    --generate-ssh-keys \
    --max-pods 96 \
    --node-vm-size Standard_DS2_v2
controller:
  service:
    loadBalancerIP: 10.3.0.4
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
helm install stable/nginx-ingress \
  --name backend-ingress \
  --namespace ingress -f ingress-basic.yaml \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
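The service list below was taken from the ingress namespace, presumably with something like:

kubectl get services --namespace ingress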
NAME                                            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
aks-helloworld                                  ClusterIP      10.4.121.75   <none>        80/TCP                       25h
backend-ingress-nginx-ingress-controller        LoadBalancer   10.4.233.18   10.3.0.4      80:30904/TCP,443:32023/TCP   23h
backend-ingress-nginx-ingress-default-backend   ClusterIP      10.4.198.93   <none>        80/TCP                       23h
ingress-demo                                    ClusterIP      10.4.53.2     <none>        80/TCP                       25h
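And the service description that follows (showing the 10.3.3.134 endpoint) comes from describing the demo service, e.g.:

kubectl describe service ingress-demo --namespace ingress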
Name:              ingress-demo
Namespace:         ingress
Labels:            <none>
Annotations:       <none>
Selector:          app=acs-helloworld-saucy-antelope
Type:              ClusterIP
IP:                10.4.53.2
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.3.3.134:80
Session Affinity:  None
Events:            <none>

Answers

Turns out the issue is related to the two v-nets being in different regions. Once I created a v-net in the same region, I had no issues hitting the internal load balancer endpoint. As the documentation below says, you can still reach the backing endpoints directly over global peering, but if you have multiple replicas behind the load balancer you can only hit one of those endpoints, which renders that approach pointless.
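Based on the constraint quoted below (it applies only to Basic SKU load balancers), an alternative to moving the v-net would likely be to make sure the AKS-managed internal load balancer uses the Standard SKU, for example at cluster creation time. A minimal sketch, mirroring the az aks create command above with the extra --load-balancer-sku flag:

az aks create \
    --resource-group AKS_Workshop \
    --name workshop-aks-cluster \
    --network-plugin azure \
    --vnet-subnet-id "/subscriptions/../resourceGroups/AKS_Workshop/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/default" \
    --docker-bridge-address 172.17.0.1/16 \
    --dns-service-ip 10.4.0.2 \
    --service-cidr 10.4.0.0/16 \
    --generate-ssh-keys \
    --max-pods 96 \
    --node-vm-size Standard_DS2_v2 \
    --load-balancer-sku standard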

VNet peering

What is VNet peering?

VNet peering (or virtual network peering) enables you to connect virtual networks. A VNet peering connection between virtual networks enables you to route traffic between them privately through IPv4 addresses. Virtual machines in the peered VNets can communicate with each other as if they are within the same network. These virtual networks can be in the same region or in different regions (also known as Global VNet Peering). VNet peering connections can also be created across Azure subscriptions.

Can I create a peering connection to a VNet in a different region?

Yes. Global VNet peering enables you to peer VNets in different regions. Global VNet peering is available in all Azure public regions, China cloud regions, and Government cloud regions. You cannot globally peer from Azure public regions to national cloud regions.

What are the constraints related to Global VNet Peering and Load Balancers?

If the two virtual networks in two different regions are peered over Global VNet Peering, you cannot connect to resources that are behind a Basic Load Balancer through the Front End IP of the Load Balancer. This restriction does not exist for a Standard Load Balancer. The following resources can use Basic Load Balancers which means you cannot reach them through the Load Balancer's Front End IP over Global VNet Peering. You can however use Global VNet peering to reach the resources directly through their private VNet IPs, if permitted.

VMs behind Basic Load Balancers
Virtual machine scale sets with Basic Load Balancers
Redis Cache
Application Gateway (v1) SKU
Service Fabric
SQL MI
API Management
Active Directory Domain Service (ADDS)
Logic Apps
HDInsight
Azure Batch
App Service Environment

You can connect to these resources via ExpressRoute or VNet-to-VNet through VNet Gateways.
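
To confirm which SKU the AKS-managed load balancer actually got, something like the following should work (a sketch; the node resource group name follows the usual MC_ pattern, with <region> as a placeholder):

# List load balancers in the AKS node resource group and show their SKU
az network lb list \
    --resource-group MC_AKS_Workshop_workshop-aks-cluster_<region> \
    --query "[].{name:name, sku:sku.name}" \
    --output table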

