Resource Management for Pods and Containers describes how to set resource requests and limits for "regular" pods in Kubernetes. Is there a supported/recommended way to set these limits for control plane components such as kube-apiserver?
Things I considered:
- Modifying the static manifests, e.g. `/etc/kubernetes/manifests/kube-apiserver.yaml`. This could work, but it will be overwritten by `kubeadm` during the next upgrade.
- Setting the `kube-reserved` or `system-reserved` flags. This could work too, however, again: they are defined in just one ConfigMap (e.g. `kubelet-config-1.21`) and will be overwritten by `kubeadm` during a node upgrade. Also, the same reservations would apply to both control plane nodes and worker nodes, and I don't want that.
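To illustrate the first option, this is roughly the kind of change I mean (the memory value is just a placeholder for my homelab; as far as I know kubeadm only sets a CPU request by default):

```yaml
# Excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative values)
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        cpu: 250m     # kubeadm's default CPU request
        memory: 1Gi   # hypothetical memory request I'd like to add
```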
I could overcome this with something like Ansible, but then Ansible would be "fighting" with kubeadm, and I'd like to avoid that.
What problem am I trying to solve?
I have a small homelab Kubernetes installation. I'd like to allow running regular pods on the control plane node(s), but I also want to reserve some resources (primarily memory) for the control plane components. That is, I'd like to set requests on things like kube-apiserver so that the scheduler knows not to place other pods (which will also have appropriate requests) where the control plane needs room.
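For completeness, the second option I considered would look something like the following `KubeletConfiguration` fragment (field names from the kubelet config API; the values are placeholders, and this would land in the shared `kubelet-config-1.21` ConfigMap, hence my concern about it applying to all nodes):

```yaml
# Sketch of a kubelet configuration reserving resources for system daemons
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 500m    # placeholder: CPU held back from schedulable capacity
  memory: 1Gi  # placeholder: memory held back for control plane components
```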