DockerCon EU 2018 Workshop: Networking in Swarm and Kubernetes for Docker Enterprise
Workshop Materials
Instructions: https://github.com/GuillaumeMorini/docker-networking-workshop
Environment: https://dockr.ly/dceu-workshop
Javier Ramirez, Senior Consultant, Hopla Software (@frjaraur)
Guillaume Morini, Solution Engineer, Docker (@GuillaumeMorini)
Remy Clement-Hausman, Solution Architect, Docker
Javier Ramirez, Senior Consultant, Hopla Software (@frjaraur)
Guillaume Morini, Solution Engineer, Docker (@GuillaumeMorini)
Yilei Yao, Solution Architect, Docker
Docker Enterprise Networking
Docker Enterprise is designed to support multiple orchestrators:
○ Swarm - CNM (libnetwork) plugins
○ Kubernetes - CNI (Calico) plugins
[Diagram: Docker EE cluster with secure cluster management and an app scheduler (Swarm or Kubernetes) orchestrating multiple nodes]
Docker Swarm Networking Overview
Docker Swarm Networking Goals
• Make multi-host networking simple
• Make networks first-class citizens in a Docker environment
• Make applications more portable
• Make networks secure and scalable
• Create a pluggable network stack
• Support multiple OS platforms
Docker Swarm Networking Design Philosophy
• Put users first: developers and operations
• Batteries included but removable
• Plugin API design
Libnetwork Architecture
[Diagram: the Docker Engine sits on top of libnetwork (CNM), which provides native and remote network drivers, native and remote IPAM drivers, load balancing, service discovery, and the network control plane]
What is Docker Bridge Networking?
Single-host networking!
• Simple to configure and troubleshoot
• Useful for basic test and dev
[Diagram: a Docker host with a bridge connecting several containers]
What is Docker Bridge Networking?
• The bridge driver creates a bridge (virtual switch) on a single Docker host
• Containers get plumbed into this bridge
• All containers on this bridge can communicate
• The bridge is a private network restricted to a single Docker host
[Diagram: a Docker host with a bridge connecting several containers]
What is Docker Bridge Networking?
Containers on different bridge networks cannot communicate.
[Diagram: three Docker hosts, each with its own bridge(s) and containers; host 3 has two separate bridges]
Bridge networking in a bit more detail
• The bridge created by the bridge driver for the pre-built bridge network is called docker0
• Each container is connected to a bridge network via a veth pair
• Provides single-host networking
• External access requires port mapping
[Diagram: containers attached to the bridge via veth pairs; the bridge connects to the host's eth0]
What is Service Discovery?
• The ability to discover services within a Swarm
• Every service registers its name with the Swarm
• Every task registers its name with the Swarm
• Service discovery uses the DNS resolver embedded inside each container and the DNS server inside each Docker Engine
• Clients can look up service names
Bridge networking and port mapping
$ docker run -p 8080:80 ...
Host port 8080 maps to container port 80.
[Diagram: container 10.0.0.8 on the host bridge; the host interface 172.14.3.55 on the L2/L3 physical network forwards host port 8080 to container port 80]
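As a concrete sketch (the container name web and the nginx image are illustrative; 172.14.3.55 is the host address from the diagram above):
$ docker run -d --name web -p 8080:80 nginx    # publish container port 80 on host port 8080
$ curl http://172.14.3.55:8080                 # traffic to the host port is forwarded to the container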
Bridge Networking Summary
• Creates a private internal network (single-host)
• External access is via port mappings on a host interface
• There is a default bridge network called bridge
• Can create user-defined bridge networks
[Diagram: containers attached via veth pairs to the "docker0" bridge, which connects to the host's eth0]
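A minimal sketch of a user-defined bridge (the names mybridge, web, and the images used are illustrative):
$ docker network create -d bridge mybridge
$ docker run -d --network mybridge --name web nginx
$ docker run -it --rm --network mybridge alpine ping -c 3 web    # containers on a user-defined bridge resolve each other by name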
What is Docker Overlay Networking?
The overlay driver enables simple and secure multi-host networking.
All containers on the overlay network can communicate!
[Diagram: an overlay network spanning three Docker hosts, with containers on each host attached to it]
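A minimal sketch, run on a Swarm manager node (the network names mynet and securenet are illustrative):
$ docker network create -d overlay mynet                     # multi-host overlay network
$ docker network create -d overlay --opt encrypted securenet # optionally encrypt the overlay data plane
$ docker network ls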
Building an Overlay Network (High level)
[Diagram: two Docker hosts (172.31.1.5 and 192.168.1.25) joined by an overlay network 10.0.0.0/24; containers receive overlay addresses 10.0.0.3 and 10.0.0.4]
Docker Overlay Networks and VXLAN
• The overlay driver uses VXLAN technology to build the network
• A VXLAN tunnel is created through the underlay network(s)
• At each end of the tunnel is a VXLAN tunnel end point (VTEP)
• The VTEP performs encapsulation and de-encapsulation
• The VTEP exists in the Docker host's network namespace
[Diagram: a VXLAN tunnel with a VTEP on each Docker host, carried over the Layer 3 underlay network(s)]
Service Discovery in a bit more detail
[Diagram: tasks task1–task3.myservice run on the "mynet" network (overlay, MACVLAN, user-defined bridge) across two Docker hosts. Swarm DNS (service discovery) records: myservice 10.0.1.18 (VIP), task1.myservice 10.0.1.19, task2.myservice 10.0.1.20, task3.myservice 10.0.1.21]
Service Discovery in a bit more detail
[Diagram: each container has an embedded DNS resolver at 127.0.0.11 that forwards lookups to the Engine DNS server on its host. Swarm DNS records are scoped per network: on "mynet", myservice 10.0.1.18 and task1–task3.myservice 10.0.1.19–10.0.1.21; on "yournet", yourservice 192.168.56.50 and task1.yourservice 192.168.56.51]
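A quick way to observe this (a sketch, assuming a running task of myservice whose image ships nslookup):
$ docker exec -it <container-id> sh
/ # nslookup myservice          # resolves to the service VIP (10.0.1.18 in this example)
/ # nslookup tasks.myservice    # resolves to the individual task IPs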
Service Virtual IP Load Balancing
• Every service gets a VIP when it's created
  − This stays with the service for its entire life
• Lookups against the VIP get load-balanced across all healthy tasks in the service
• Behind the scenes it uses Linux kernel IPVS to perform transport-layer load balancing
• docker inspect <service> shows the service VIP
[Table: the service VIP 10.0.1.18 for myservice load balances across healthy tasks task1–task5.myservice at 10.0.1.19–10.0.1.23]
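For example (the service name myservice and the nginx image are illustrative; the VIP shown is whatever Swarm assigned):
$ docker service create --name myservice --replicas 3 --network mynet nginx
$ docker service inspect --format '{{json .Endpoint.VirtualIPs}}' myservice   # prints the VIP per attached network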
What is a Routing Mesh?
• Native load balancing of requests coming from an external source
• Services get published on a single port across the entire Swarm
• A special overlay network called "ingress" is used to forward the requests to a task in the service
• Traffic is internally load balanced as per normal service VIP load balancing
• Incoming traffic to the published port can be handled by all Swarm nodes
Routing Mesh Example
1. Three Docker hosts
2. New service with 2 tasks
3. Connected to the mynet overlay network
4. Service published on port 8080 swarm-wide
5. External LB sends request to Docker host 3 on port 8080
6. Routing mesh forwards the request to a healthy task using the ingress network
[Diagram: an external LB sends traffic to port 8080 on Docker host 3; IPVS on the ingress network forwards the request to a healthy task on the "mynet" overlay network]
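A sketch of publishing a service swarm-wide (the service name webservice and the nginx image are illustrative):
$ docker service create --name webservice --replicas 2 --network mynet --publish published=8080,target=80 nginx
$ curl http://<any-swarm-node-ip>:8080    # every node answers on the published port, even nodes not running a task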
Workshop Materials
Instructions: https://github.com/GuillaumeMorini/docker-networking-workshop
Environment: https://dockr.ly/dceu-workshop
Kubernetes Networking Overview
Overview
• Kubernetes Networking Concepts
• Calico CNI Model and Calico CNI Plug-in
• Services, Service Discovery, Ingress, Network Policies
• Docker Enterprise Networking Architecture with Calico
• Network Deployment Models
Kubernetes Networking Concepts
• All containers communicate with all other containers without NAT (Network Address Translation)
• All nodes can communicate with all containers and vice versa without NAT
• The IP that a container sees itself as is the same IP that others see it as
Container Network Interface (CNI)
[Diagram: Kubernetes calls the CNI plugin library; the CNI plugin handles IPAM and wires the pod's network namespace into the network via a veth pair]
Calico Networking Plugin
Project Calico is an open source container networking provider and network policy engine that implements the CNI interface:
• IP address management
• Container (pod) networking — Linux kernel L3 routing
• Inter-node — IP routing
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods, based on the same IP networking principles as the internet. Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent-based network security policy for Kubernetes pods via its distributed firewall.
Secure Networking with Project Calico: Built-in But Swappable
FEATURE / CAPABILITY
• Pre-integrated with Project Calico:
  − Highly scalable distributed networking model integrates well with various infrastructure platforms (incl. cloud and on-prem)
  − Integration with Kubernetes Network Policies
• "Batteries included, but swappable": the CNI plug-in is swappable for other solutions
KEY BENEFITS
• Get a highly scalable networking solution out of the box, with the option to swap in your preferred solution
• Define networking policies once and apply them consistently across different infrastructure platforms
[Example shown on slide: a default-deny ingress NetworkPolicy]
Services
• Services enable you to expose a single address backed by multiple pods
• kube-proxy runs on each node and performs basic load balancing between the pods
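A minimal sketch (the deployment name web and the nginx image are illustrative):
$ kubectl create deployment web --image=nginx
$ kubectl scale deployment web --replicas=3
$ kubectl expose deployment web --port=80 --target-port=80
$ kubectl get service web    # shows the cluster-internal ClusterIP backed by the three pods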
Service Discovery
• Service IPs are advertised to other pods via DNS
• Kubernetes includes an internal DNS server, kube-dns
• Kubernetes creates a DNS record for:
  − every Service (including the DNS server itself): my-svc.my-namespace.svc.cluster.local
  − Pods (where configured): pod-ip-address.my-namespace.pod.cluster.local
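A quick sketch of resolving a Service name from inside the cluster (assumes the web Service from the previous example, in the default namespace, and an image that ships nslookup such as busybox):
$ kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup web.default.svc.cluster.local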
Ingress
• The Kubernetes Ingress API allows you to configure provisioning of a dedicated load balancer
• Allows use of more advanced LB algorithms than kube-proxy
• The Ingress controller runs as a pod, specific to each LB
  • It may itself be a software load balancer (e.g. NGINX)
  • or a configuration gateway for an external LB appliance
[Diagram: an Ingress routes foo.mydomain.com, mydomain.com/bar, and other traffic to different Services, each backed by multiple pods]
Ingress Example
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
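Saved to a file (test-ingress.yaml is just an illustrative name), the resource can be applied and exercised like this, assuming an ingress controller is running in the cluster:
$ kubectl apply -f test-ingress.yaml
$ kubectl get ingress test-ingress
$ curl http://<ingress-controller-address>/testpath    # routed to the "test" Service on port 80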
Network Policy
• Stage/tier separation
• Tenant/namespace isolation
• Compliance
• Micro-segmentation
Kubernetes Network Policy
● Specifies how groups of pods are allowed to communicate with each other and other network endpoints using:
  ○ Pod label selector
  ○ Namespace label selector
  ○ Protocol + ports
● Pods selected by a NetworkPolicy:
  ○ Are isolated by default
  ○ Are allowed incoming traffic if that traffic matches a NetworkPolicy ingress rule
● Requires a controller to implement the API - Calico does this in Docker Enterprise
https://kubernetes.io/docs/concepts/services-networking/network-policies/
Example Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
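Applied with kubectl (the file name below is illustrative), the policy's effect can be checked like this:
$ kubectl apply -f my-network-policy.yaml
$ kubectl describe networkpolicy my-network-policy -n my-namespace   # only pods labeled role=frontend may reach role=db pods on TCP 6379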
Calico Architecture
● Distributed control plane, calico/node on each host
● Shared state in etcd key/value store
● Overlay (IP-IP) or unencapsulated (flat networking) — IP-IP default in Enterprise
● BGP for route distribution (optionally peer with infrastructure)
● Network policy enforced on container and host interfaces
Calico in Docker Enterprise
[Diagram: on a UCP manager (Linux): calico cni pods, kubedns, kube-proxy, kubelet, kube-controller-manager, kube-manager, kube-scheduler; on a UCP Linux worker: calico cni pods, kube-proxy, kubelet]
Calico Deployment Models: Overlay Data Plane with Full Mesh Control Plane (Default)
This is the default mode for Kubernetes networking in UCP. It is the most portable and hands-off deployment model. Each Calico router is peered via BGP with every other Calico router. This can have scaling limitations at greater than 100 nodes within a single cluster (depending on node resourcing). The overlay encapsulates pod traffic in an IPIP tunnel, using separate subnets for the overlay traffic and the underlay host-to-host traffic.
Example: Pod network 192.168.10.0/24, host network 192.168.20.0/24
[Diagram: UCP workers connected by an IPIP tunnel (data plane) and a full BGP peering mesh (control plane)]
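A hedged sketch of inspecting this on a cluster node, assuming the calicoctl CLI is installed and configured against the cluster's etcd:
$ calicoctl node status           # lists this node's BGP peers; in full-mesh mode every other node appears as a peer
$ calicoctl get ippool -o wide    # shows the pod CIDR and whether IPIP encapsulation is enabled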
Calico Deployment Models: Overlay Data Plane with Route Reflector Control Plane
Fewer overall BGP peers makes this deployment model more performant and scalable by centralizing the BGP peer connections on a pair of redundant Route Reflectors. It requires a redeployment of Calico with RRs. The RRs must be placed on dedicated nodes with no other workloads. The RRs are highly available by default.
[Diagram: UCP workers peer via BGP with a redundant pair of Calico Route Reflectors; pod traffic still flows over the IPIP tunnel]
Kubernetes Network Encryption
Use Case
● Apply default encryption without intervention or awareness from users
● Protect internal application traffic on untrusted or shared infrastructure by default
Usage
● Optional feature in UCP
● Deploy the encryption daemonset to encrypt all host-to-host traffic between all pods within the Kubernetes cluster
● Key management and rotation managed centrally by the add-on encryption module
● IPSec encryption
[Diagram: pods on different hosts with their host-to-host traffic encrypted]
Kubernetes Network Encryption
Architecture
● The Secure Overlay Manager manages and rotates keys within the cluster
● The Secure Overlay Agent manages the encryption tunnels between hosts in the cluster; it does not sit in the traffic path
● Traffic is encrypted by the in-kernel IPSec capabilities of Linux
[Diagram: a Secure Overlay Manager pod on the UCP manager and a Secure Overlay Agent daemonset on each UCP worker]
Review
● Container Networking — Simple, IP-based. Overlay optional. Calico is shipped out of the box with Docker Enterprise
● Services — Cluster-internal load balancing. Implemented by kube-proxy
● Service Discovery — KubeDNS
● Ingress — Routing of external traffic into the cluster (e.g. with NGINX)
● Network Policy — Traffic isolation within the cluster. Implemented by Calico in Docker Enterprise
Thank you
