I have set up a working Kubernetes cluster using Rancher, which defines two networks:

- `10.42.0.0/16` for pod IP addresses
- `10.43.0.0/16` for service endpoints
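For reference, the service range can be confirmed from the cluster itself: the built-in `kubernetes` service always takes the first IP of the service CIDR (a quick check, assuming `kubectl` is configured for this cluster):

```shell
# Should print 10.43.0.1 if the service CIDR is really 10.43.0.0/16
kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
```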
I want to use my existing Caddy reverse proxy to access those service endpoints, so I defined a route (10.10.10.172 is one of my Kubernetes nodes):
```
sudo route add -net 10.43.0.0 netmask 255.255.0.0 gw 10.10.10.172
```

My routing table on the Caddy web server:
```
arturh@web:~$ sudo route
[sudo] password for arturh:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         DD-WRT.local    0.0.0.0         UG    0      0        0 eth0
10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.43.0.0       rancherkube1.lo 255.255.0.0     UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
```

Using this setup I can access and use 10.43.0.1:443 without any issues (it is the main Kubernetes API endpoint):
```
arturh@web:~$ nmap 10.43.0.1 -p 443 | grep 443
443/tcp open  https
arturh@web:~$ curl -k https://10.43.0.1
Unauthorized
```

But accessing any other IP address in the 10.43.0.0/16 network fails, and I cannot figure out why:
```
arturh@web:~$ kubectl get svc | grep prometheus-server
prometheus-prometheus-server   10.43.115.122   <none>        80/TCP    1d
arturh@web:~$ curl 10.43.115.122
curl: (7) Failed to connect to 10.43.115.122 port 80: No route to host
arturh@web:~$ traceroute 10.43.115.122
traceroute to 10.43.115.122 (10.43.115.122), 30 hops max, 60 byte packets
 1  rancherkube1.local (10.10.10.172)  0.348 ms  0.341 ms  0.332 ms
 2  rancherkube1.local (10.10.10.172)  3060.710 ms !H  3060.722 ms !H  3060.716 ms !H
```

I can access everything from the Kubernetes node itself:
```
[rancher@rancherkube1 ~]$ wget -qO- 10.43.115.122
<!DOCTYPE html>
<html lang="en">...
```

which works because of iptables NAT rules:
```
[rancher@rancherkube1 ~]$ sudo iptables -t nat -L -n | grep 10.43
KUBE-SVC-NGLRF5PTGH2R7LSO  tcp  --  0.0.0.0/0  10.43.115.122  /* default/prometheus-prometheus-server:http cluster IP */  tcp dpt:80
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0  10.43.0.1      /* default/kubernetes:https cluster IP */  tcp dpt:443
```

I'm confused because the entry for 10.43.0.1, which works, looks identical to the ones that do not. I figure I need to add an iptables rule to allow access to the 10.43.0.0/16 subnet, but I'm not familiar with iptables.
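Since the NAT entries look the same, my current suspicion is that the difference lies in forwarding rather than in the DNAT rules themselves. These are the checks I would run on the node (a diagnostic sketch; the `KUBE-SVC-NGLRF5PTGH2R7LSO` chain name is taken from the grep output above):

```shell
# On rancherkube1: check that the kernel forwards packets between interfaces
sysctl net.ipv4.ip_forward    # expected to be 1 for the node to route traffic

# Check whether the FORWARD chain rejects the traffic coming from the web server
sudo iptables -L FORWARD -n -v

# Inspect the full DNAT chain behind the failing service's cluster IP
sudo iptables -t nat -L KUBE-SVC-NGLRF5PTGH2R7LSO -n
```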
I'm quite new to the whole Kubernetes business. Is this the correct way to go about accessing service endpoints? If so, can someone please help me with the correct iptables command?
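In case it helps with diagnosis: I could also capture traffic on the node while repeating the failing request from the web server (assuming `tcpdump` is installed on the node), to see whether the packets arrive and where the replies go:

```shell
# On rancherkube1: watch everything to/from the failing service IP
sudo tcpdump -ni any host 10.43.115.122

# Meanwhile, on the web server:
curl 10.43.115.122
```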