Conversation

@kartikjoshi21 (Contributor) commented Dec 8, 2025

  • Make kubeadm configs IP-family aware:

    • Emit podSubnet/serviceSubnet as v4, v6 or dual (comma-separated)
    • Use IPv6 advertiseAddress/node-ip for ipv6 clusters, leave node-ip unset
      for dual-stack so kubelet can advertise both families
    • Bind kube-apiserver to :: for ipv6/dual and set KubeProxy metrics
      bind address to 0.0.0.0:10249 (ipv4/dual) or [::]:10249 (ipv6-only)
    • Thread API server cert SANs from Go into the kubeadm templates
  • Update kubelet/certs/bootstrapper:

    • Pick kubelet node-ip from Node.IPv6 for ipv6 clusters, fallback to IPv4
    • Include v4 and v6 service VIPs plus 127.0.0.1/::1 in apiserver cert SANs
    • Add control-plane.minikube.internal A/AAAA records in /etc/hosts
      (IPv4-only, IPv6-only, or both in dual-stack)
  • Make endpoints & kubeconfig IPv6 safe:

    • Treat literal IPv6 addresses correctly in ControlPlaneEndpoint and
      kubeconfig URLs (https://[::1]:8443)
    • Keep existing IPv4 behaviour unchanged
  • Ensure core Services follow the chosen family:

    • After init / node-ready / restart, patch kube-dns Service with the
      desired ipFamilyPolicy + ipFamilies (SingleStack IPv4/IPv6, or
      PreferDualStack [IPv4, IPv6])
  • Extend ServiceClusterIP() to handle comma-separated Service CIDRs and
    prefer IPv4 when present, so dual-stack serviceSubnet values do not
    break certificate generation (see the sketch after this list).
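
For context, a minimal sketch of the CIDR-selection logic described in the last bullet; the helper name `firstServiceIP` and the network-address-plus-one VIP derivation are illustrative, not the exact minikube implementation:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// firstServiceIP picks the conventional first usable IP (network address + 1)
// from a possibly comma-separated service CIDR list, preferring IPv4 so that
// dual-stack serviceSubnet values keep certificate generation working.
func firstServiceIP(serviceCIDRs string) (net.IP, error) {
	var fallback net.IP
	for _, raw := range strings.Split(serviceCIDRs, ",") {
		ip, ipnet, err := net.ParseCIDR(strings.TrimSpace(raw))
		if err != nil {
			return nil, fmt.Errorf("invalid service CIDR %q: %w", raw, err)
		}
		vip := nextIP(ipnet.IP)
		if ip.To4() != nil { // IPv4 wins whenever it is present
			return vip, nil
		}
		if fallback == nil { // remember the first IPv6 VIP as a fallback
			fallback = vip
		}
	}
	if fallback == nil {
		return nil, fmt.Errorf("no valid CIDR in %q", serviceCIDRs)
	}
	return fallback, nil
}

// nextIP increments an IP by one, enough to derive the ".1" service VIP.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	vip, _ := firstServiceIP("fd00:200::/108,10.96.0.0/12")
	fmt.Println(vip) // prints 10.96.0.1: IPv4 preferred even when listed second
}
```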

Fixes: #8535
Refer to this for testing steps: #22064 (comment)
Tested and verified on:

```console
PS C:\Users\kartikjoshi> wsl --version
WSL version: 2.6.2.0
Kernel version: 6.6.87.2-1
WSLg version: 1.0.71
MSRDC version: 1.2.6353
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.26100.1-240331-1435.ge-release
Windows version: 10.0.26200.7093
```
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: kartikjoshi21
Once this PR has been reviewed and has the lgtm label, please assign prezha for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the size/XXL and needs-ok-to-test labels Dec 8, 2025
@k8s-ci-robot (Contributor) commented:

Hi @kartikjoshi21. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the cncf-cla: yes label Dec 8, 2025
@minikube-bot (Collaborator) commented:

Can one of the admins verify this patch?

@kartikjoshi21 (Contributor, Author) commented:

Logs:

```console
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube start -p v4-only \
    --driver=docker \
    --ip-family=ipv4 \
    --cni=bridge \
    --service-cluster-ip-range=10.96.0.0/12 \
    --pod-cidr=10.244.0.0/16
😄  [v4-only] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting "v4-only" primary control-plane node in "v4-only" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
💾  Downloading Kubernetes v1.34.1 preload ...
    > preloaded-images-k8s-v18-v1...: 337.01 MiB / 337.01 MiB  100.00% 3.71 Mi
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "v4-only" cluster and "default" namespace by default

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube ssh -p v4-only -- 'sudo cat /var/tmp/minikube/kubeadm.yaml'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.58.2"
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "v4-only"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.58.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs:
    - "control-plane.minikube.internal"
    - "127.0.0.1"
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: "control-plane.minikube.internal:8443"
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: "0.0.0.0:10249"
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube ssh -p v4-only -- 'grep control-plane.minikube.internal /etc/hosts'
192.168.58.2	control-plane.minikube.internal

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl --context v4-only get svc kube-dns -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-12-04T10:53:26Z"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "278"
  uid: b517354c-9628-4c12-9d01-3a4172fef1f0
spec:
  clusterIP: 10.96.0.10
  clusterIPs:
  - 10.96.0.10
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```

```console
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube delete -p v6-only
🔥  Deleting "v6-only" in docker ...
🔥  Deleting container "v6-only" ...
🔥  Removing /home/kartikjoshi/.minikube/machines/v6-only ...
💀  Removed all traces of the "v6-only" cluster.
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ docker network rm v6-only || true   # clean any stale network
v6-only
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube start -p v6-only \
    --driver=docker \
    --cni=bridge \
    --ip-family=ipv6 \
    --pod-cidr-v6=fd11:11::/64 \
    --service-cluster-ip-range-v6=fd00:100::/108   # NOTE: no --subnet-v6 and no --static-ipv6
😄  [v6-only] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
💡  If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart: {"ipv6": true, "fixed-cidr-v6": "fd00:55:66::/64"}
👍  Starting "v6-only" primary control-plane node in "v6-only" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "v6-only" cluster and "default" namespace by default

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get nodes -o wide
NAME      STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
v6-only   Ready    control-plane   4m38s   v1.34.1   fd00::2       <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   docker://29.0.2

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube ssh -p v6-only -- 'sudo cat /var/tmp/minikube/kubeadm.yaml'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "fd00::2"
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "v6-only"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "fd00::2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs:
    - "control-plane.minikube.internal"
    - "::1"
  extraArgs:
    - name: "bind-address"
      value: "::"
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: "[fd00::2]:8443"
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "fd11:11::/64"
  serviceSubnet: "fd00:100::/108"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "fd11:11::/64"
metricsBindAddress: "[::]:10249"
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
```

```console
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube start --driver=docker --ip-family=dual --service-cluster-ip-range=10.96.0.0/12 --service-cluster-ip-range-v6=fd00:200::/108 --pod-cidr=10.244.0.0/16 --pod-cidr-v6=fd11:22::/64
😄  minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
💡  If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart: {"ipv6": true, "fixed-cidr-v6": "fd00:55:66::/64"}
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
💾  Downloading Kubernetes v1.34.1 preload ...
    > preloaded-images-k8s-v18-v1...: 337.01 MiB / 337.01 MiB  100.00% 3.22 Mi
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube ssh -- 'sudo sed -n "1,220p" /var/tmp/minikube/kubeadm.yaml'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.58.2"
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs:
    - "control-plane.minikube.internal"
    - "127.0.0.1"
    - "::1"
  extraArgs:
    - name: "bind-address"
      value: "::"
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: "control-plane.minikube.internal:8443"
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16,fd11:22::/64"
  serviceSubnet: "10.96.0.0/12,fd00:200::/108"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16,fd11:22::/64"
metricsBindAddress: "0.0.0.0:10249"
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get svc kube-dns -n kube-system -o yaml | sed -n '1,80p'
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-12-05T08:00:49Z"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "293"
  uid: 35db8ff7-11be-4336-a315-51452beb8b99
spec:
  clusterIP: 10.96.0.10
  clusterIPs:
  - 10.96.0.10
  - fd00:200::c60
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: dualtest
---
apiVersion: v1
kind: Pod
metadata:
  name: dual-nginx
  namespace: dualtest
  labels:
    app: dual-nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dual-nginx-svc
  namespace: dualtest
spec:
  selector:
    app: dual-nginx
  ports:
  - port: 80
    targetPort: 80
  ipFamilyPolicy: PreferDualStack
EOF
namespace/dualtest created
pod/dual-nginx created
service/dual-nginx-svc created

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get svc dual-nginx-svc -n dualtest -o yaml | sed -n '1,80p'
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"dual-nginx-svc","namespace":"dualtest"},"spec":{"ipFamilyPolicy":"PreferDualStack","ports":[{"port":80,"targetPort":80}],"selector":{"app":"dual-nginx"}}}
  creationTimestamp: "2025-12-05T08:14:43Z"
  name: dual-nginx-svc
  namespace: dualtest
  resourceVersion: "1059"
  uid: 7be9e20a-b745-4b6f-964c-5790f35b2234
spec:
  clusterIP: 10.104.18.22
  clusterIPs:
  - 10.104.18.22
  - fd00:200::c17f
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dual-nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
@kartikjoshi21 (Contributor, Author) commented:

0. Build + clean

```bash
# From minikube repo root
make
./out/minikube delete --all --purge
docker network rm minikube || true
```

1. IPv4-only sanity

```bash
./out/minikube start \
  --driver=docker \
  --ip-family=ipv4

kubectl get nodes -o wide
kubectl get pods -A

# kubeconfig endpoint (should be IPv4, no brackets)
kubectl config view --minify \
  -o jsonpath='{.clusters[0].cluster.server}'; echo

# control-plane alias in /etc/hosts
./out/minikube ssh -- 'grep control-plane.minikube.internal /etc/hosts || echo "no alias"'

# kube-dns service family
kubectl -n kube-system get svc kube-dns -o yaml | sed -n '1,40p'
```

2. IPv6-only

```bash
./out/minikube delete --all --purge
docker network rm minikube || true

./out/minikube start \
  --driver=docker \
  --ip-family=ipv6

kubectl get nodes -o wide
kubectl get pods -A

# Kubernetes service & endpoints
kubectl get svc kubernetes -o yaml | sed -n '1,40p'
kubectl get endpoints kubernetes -o yaml | sed -n '1,40p'

# kube-dns service should be IPv6
kubectl -n kube-system get svc kube-dns -o yaml | sed -n '1,40p'

# kubeconfig should have bracketed IPv6
kubectl config view --minify \
  -o jsonpath='{.clusters[0].cluster.server}'; echo

# control-plane alias in /etc/hosts (IPv6)
./out/minikube ssh -- 'grep control-plane.minikube.internal /etc/hosts || echo "no alias"'

# Direct apiserver check via endpoints IP
KUBE_API_V6=$(kubectl get endpoints kubernetes -o jsonpath='{.subsets[0].addresses[0].ip}')
./out/minikube ssh -- "curl -k https://[${KUBE_API_V6}]:8443/version"
```

Known limitation to call out in PR:

```bash
# From coredns pod or node, this typically fails today:
./out/minikube ssh -- 'curl -vk https://[fd00::1]:443/version || echo "via Service VIP failed (expected with current stack)"'
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
```

The IPv6 Service VIP (fd00::1) is not yet routable from pods (CoreDNS logs "network is unreachable"). This is a follow-up CNI / kube-proxy dataplane issue, not fully solved by this PR.

3. Dual-stack

```bash
./out/minikube delete --all --purge
docker network rm minikube || true

./out/minikube start \
  --driver=docker \
  --ip-family=dual

kubectl get nodes -o wide
kubectl get pods -A

# kube-dns should be dual-stack
kubectl -n kube-system get svc kube-dns -o yaml | sed -n '1,40p'

# control-plane alias should have both A + AAAA
./out/minikube ssh -- 'grep control-plane.minikube.internal /etc/hosts || echo "no alias"'

# kubeconfig endpoint (likely IPv4, but IPv6-aware)
kubectl config view --minify \
  -o jsonpath='{.clusters[0].cluster.server}'; echo
```
@illume left a comment:

Nice one 🎉

I added a few questions/notes.

```go
	ExtraArgs     []string // a list of any extra option to pass to oci binary during creation time, for example --expose 8080...
	ListenAddress string   // IP Address to listen to
	GPUs          string   // add GPU devices to the container
	Subnetv6      string
```

Add missing docs, and align them with the other comments.

```go
		klog.Infof("%s network %s %s created", ociBin, networkName, p.CIDR)
		return gw, nil
	}
	// don't retry if error is not address is taken
```

There is a second "kic: plumb ip-family and ipv6 for docker bridge" commit that could be merged with the first one.

```yaml
nodeRegistration:
  criSocket: {{if .CRISocket}}{{.CRISocket}}{{else}}/var/run/dockershim.sock{{end}}
  name: "{{.NodeName}}"
  {{- if .NodeIP }}
```

I'm wondering about rendering values like NodeIP when they contain IPv6 literals, since the `:` characters can break YAML (for example `[::]:10249`).

I see you have quotes elsewhere, but maybe a few spots are missing?

```go
	return fmt.Sprintf(`^[[:space:]]*[:0-9A-Fa-f]+[[:space:]]+%s$`, qName)
}

func addHostAliasCommand(dropRegex, record string, sudo bool, destPath string) *exec.Cmd {
```

This needs documentation.

```go
	script := fmt.Sprintf(
		`{ grep -v $'\t%s$' "%s"; echo "%s"; } > /tmp/h.$$; %s cp /tmp/h.$$ "%s"`,
		name,
		`{ grep -v -F "%s" "%s"; echo "%s"; } > /tmp/h.$$; %s cp /tmp/h.$$ "%s"`,
```

I see you have a dropRegex param... but maybe forgot to use it here?
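
If the intent is to use it, a minimal sketch (assuming dropRegex is meant to filter out existing records for the host before the new one is appended; variable names beyond the snippet above, like `sudoPrefix`, are illustrative):

```go
// Drop lines matching dropRegex instead of a fixed-string grep, then append
// the new record; grep -vE interprets the extended regex built by the caller.
script := fmt.Sprintf(
	`{ grep -vE '%s' "%s"; echo "%s"; } > /tmp/h.$$; %s cp /tmp/h.$$ "%s"`,
	dropRegex, destPath, record, sudoPrefix, destPath,
)
```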

case "ipv6":
return "[::]:10249"
default: // ipv4 or dual
return "0.0.0.0:10249"

In dual-stack, does this mean the metrics endpoint is exposed only on IPv4? I would expect the `case "dual":` to return `"[::]:10249"` so it binds both families.

Or maybe add a comment somewhere saying that dual only listens on IPv4, if that is the intention.
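
If binding both families is the intent, a minimal sketch of the suggested change (on a default Linux kernel, a `::` listener also accepts IPv4 connections unless `net.ipv6.bindv6only` is set):

```go
switch ipFamily {
case "ipv6", "dual":
	// "::" accepts both IPv6 and IPv4-mapped connections on Linux defaults
	return "[::]:10249"
default: // ipv4
	return "0.0.0.0:10249"
}
```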

```go
}

	return hostname, ips[0], port, nil
	return host, ips[0], port, nil
```

Do you want to always take the first IP resolved? For ipv6/dual you probably want to prefer an AAAA if present?

What I mean is something like this (untested):

```go
// pickResolvedIP deterministically prefers AAAA for ipv6/dual, A for ipv4.
func pickResolvedIP(ips []net.IP, ipFamily string) net.IP {
	preferV6 := strings.EqualFold(ipFamily, "ipv6") || strings.EqualFold(ipFamily, "dual")
	var v6, v4 []net.IP
	for _, ip := range ips {
		if ip.To4() == nil {
			v6 = append(v6, ip)
		} else {
			v4 = append(v4, ip)
		}
	}
	if preferV6 {
		if len(v6) > 0 {
			return v6[0]
		}
		if len(v4) > 0 {
			return v4[0]
		}
	} else {
		if len(v4) > 0 {
			return v4[0]
		}
		if len(v6) > 0 {
			return v6[0]
		}
	}
	return nil
}
```

Or maybe just taking what the resolver gives first (as you do now) is better?

```go
	return nil
}

func advertiseIP(cc config.ClusterConfig, n config.Node) string {
```

This one needs documentation.

```diff
 certificatesDir: {{.CertDir}}
 clusterName: mk
-controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
+controlPlaneEndpoint: "{{.ControlPlaneEndpoint}}"
```

Use {{.ControlPlaneEndpoint}} in v1beta2/3 templates as well (instead of concatenating address+port) so IPv6 bracketing/alias is uniform across all kubeadm configs?
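
A sketch of what computing that single field on the Go side might look like; `controlPlaneEndpoint` here is an illustrative helper name, and the point is that `net.JoinHostPort` adds the IPv6 brackets only when needed:

```go
import (
	"net"
	"strconv"
)

// controlPlaneEndpoint renders host:port once, so every kubeadm template
// version can embed the same pre-bracketed string.
func controlPlaneEndpoint(addr string, port int) string {
	// "192.168.58.2", 8443 -> "192.168.58.2:8443"
	// "fd00::2",      8443 -> "[fd00::2]:8443"
	// hostnames pass through: "control-plane.minikube.internal:8443"
	return net.JoinHostPort(addr, strconv.Itoa(port))
}
```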

```yaml
{{end -}}{{end -}}
certificatesDir: {{.CertDir}}
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
```

Use {{.ControlPlaneEndpoint}} here as in the v1beta4 templates (instead of concatenating address+port) so IPv6 bracketing/alias is uniform across all kubeadm configs?

@k8s-ci-robot (Contributor) commented:

PR needs rebase.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-rebase label Dec 20, 2025