I have an AKS cluster (1.29.2) with Calico network policy. I have an egress NetworkPolicy that should allow outbound traffic to the internet but block all traffic to the RFC 1918 private ranges.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-allow
  namespace: production
spec:
  egress:
    - ports:
        - port: 53
          protocol: UDP
      to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 172.16.0.0/12
              - 10.0.0.0/8
              - 192.168.0.0/16
  podSelector:
    matchLabels:
      app.kubernetes.io/name: prodbindms
  policyTypes:
    - Egress
```

But when I try to connect from the prodbindms pod to the Service IP of another microservice in the same cluster, which also falls in the RFC 1918 range (`nc -vz 172.20.217.17 8080`), the connection succeeds. As soon as I remove the whole `ipBlock` rule, that connection fails, but then I can't reach any public IP either: basically all egress except traffic to kube-dns starts failing.
It looks like the `except` list in `ipBlock` isn't being applied at all. I have tried every variation I can think of but couldn't make this work. The same NetworkPolicy works fine on an AWS EKS cluster. What could be the reason this doesn't work on Azure AKS?
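To rule out a CIDR mistake on my side, I double-checked that the Service IP really does fall inside one of the `except` ranges (a quick sketch using Python's standard `ipaddress` module; the IP and CIDRs are the ones from the policy above):

```python
import ipaddress

# Service IP of the other microservice, from the nc test above
svc_ip = ipaddress.ip_address("172.20.217.17")

# The except CIDRs from the NetworkPolicy
except_cidrs = [
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

# Which except range(s) contain the Service IP
matches = [str(net) for net in except_cidrs if svc_ip in net]
print(matches)  # → ['172.16.0.0/12']
```

So 172.20.217.17 is squarely inside 172.16.0.0/12, and the policy should be blocking it.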
Relevant network fields from `az aks show`:

```json
"networkMode": null,
"networkPlugin": "kubenet",
"networkPluginMode": null,
"networkPolicy": "calico",
```