@@ -26,7 +26,8 @@ or configure topology spread constraints for individual workloads.
such as regions, zones, nodes, and other user-defined topology domains.
This can help to achieve high availability as well as efficient resource utilization.

-You can set [cluster-level constraints](#cluster-level-default-constraints) as a default, or configure topology spread constraints for individual workloads.
+You can set [cluster-level constraints](#cluster-level-default-constraints) as a default,
+or configure topology spread constraints for individual workloads.

<!-- body -->

@@ -62,19 +63,20 @@ are split across three different datacenters (or infrastructure zones). Now you
have less concern about a single node failure, but you notice that latency is
higher than you'd like, and you are paying for network costs associated with
sending network traffic between the different zones.
-
-You decide that under normal operation you'd prefer to have a similar number of replicas
-[scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone,
-and you'd like the cluster to self-heal in the case that there is a problem.
-
-Pod topology spread constraints offer you a declarative way to configure that.
-->
As you scale up and run more Pods, a different concern becomes important. Imagine that
you have 3 nodes running 5 Pods each. The nodes have enough capacity to run that many
replicas; however, the clients that interact with this workload are split across three
different datacenters (or infrastructure zones). Now you have less concern about a single
node failure, but you notice that latency is higher than you'd like, and you are paying
for network costs associated with sending network traffic between the different zones.

+<!--
+You decide that under normal operation you'd prefer to have a similar number of replicas
+[scheduled](/docs/concepts/scheduling-eviction/) into each infrastructure zone,
+and you'd like the cluster to self-heal in the case that there is a problem.
+
+Pod topology spread constraints offer you a declarative way to configure that.
+-->
You decide that under normal operation you'd prefer to have a similar number of replicas
[scheduled](/zh-cn/docs/concepts/scheduling-eviction/) into each infrastructure zone,
and you'd like the cluster to self-heal in the case that there is a problem.

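A minimal sketch of what that declarative configuration can look like, spreading replicas
evenly across zones (the Pod name, the `app: foo` label, and the container image are
illustrative placeholders; `topologySpreadConstraints` and its fields are the actual Pod
spec API):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod      # placeholder name
  labels:
    app: foo             # placeholder label, matched by the selector below
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                # tolerate at most 1 Pod of imbalance between zones
      topologyKey: topology.kubernetes.io/zone  # treat each zone as one topology domain
      whenUnsatisfiable: DoNotSchedule          # keep the Pod pending rather than increase the skew
      labelSelector:
        matchLabels:
          app: foo                              # only Pods with this label count toward the skew
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

In a Deployment, the same block would sit under the Pod template's `spec`.
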
@@ -221,7 +223,13 @@ your cluster. Those fields are:
  will try to put a balanced number of pods into each domain.
Also, we define an eligible domain as a domain whose nodes meet the requirements of
nodeAffinityPolicy and nodeTaintsPolicy.
+-->
+- **topologyKey** is the key of [node labels](#node-labels). Nodes that have a label with this
+  key and identical values are considered to be in the same topology. We call each instance of
+  a topology (that is, a key-value pair) a domain. The scheduler will try to put a balanced
+  number of pods into each domain. Also, we define an eligible domain as a domain whose nodes
+  meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy.

+<!--
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
  - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
@@ -232,11 +240,6 @@ your cluster. Those fields are:
  See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
  for more details.
-->
-- **topologyKey** is the key of [node labels](#node-labels). Nodes that have a label with this
-  key and identical values are considered to be in the same topology. We call each instance of
-  a topology (that is, a key-value pair) a domain. The scheduler will try to put a balanced
-  number of pods into each domain. Also, we define an eligible domain as a domain whose nodes
-  meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy.
-
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread
  constraint (see the sketch after this list):
  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
  - `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
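To make `topologyKey` and `whenUnsatisfiable` concrete, here is a sketch of a single soft
constraint (the `app: web` label is an illustrative placeholder):

```yaml
topologySpreadConstraints:
  - maxSkew: 2
    topologyKey: kubernetes.io/hostname  # every node forms its own topology domain
    whenUnsatisfiable: ScheduleAnyway    # soft constraint: prefer balance, never block scheduling
    labelSelector:
      matchLabels:
        app: web                         # illustrative; only matching Pods count toward the skew
```

Swapping in `DoNotSchedule` would turn this into a hard constraint: a Pod that cannot be
placed without pushing the skew above 2 stays pending instead.
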
@@ -434,12 +437,6 @@ Usually, if you are using a workload controller such as a Deployment, the pod template
takes care of this for you. If you mix different spread constraints then Kubernetes
follows the API definition of the field; however, the behavior is more likely to become
confusing and troubleshooting is less straightforward.
-
-You need a mechanism to ensure that all the nodes in a topology domain (such as a
-cloud provider region) are labelled consistently.
-To avoid you needing to manually label nodes, most clusters automatically
-populate well-known labels such as `kubernetes.io/hostname`. Check whether
-your cluster supports this.
-->
## Consistency {#Consistency}

@@ -449,6 +446,13 @@ your cluster supports this.
If you mix different spread constraints then Kubernetes follows the API definition of the
field; however, the behavior is more likely to become confusing and troubleshooting is
less straightforward.

+<!--
+You need a mechanism to ensure that all the nodes in a topology domain (such as a
+cloud provider region) are labelled consistently.
+To avoid you needing to manually label nodes, most clusters automatically
+populate well-known labels such as `kubernetes.io/hostname`. Check whether
+your cluster supports this.
+-->
You need a mechanism to ensure that all the nodes in a topology domain (such as a cloud
provider region) are labelled consistently. To avoid you needing to manually label nodes,
most clusters automatically populate well-known labels such as `kubernetes.io/hostname`.
Check whether your cluster supports this.
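As an illustration, node metadata in such a cluster typically carries well-known labels
like the following (the node name and the zone/region values are placeholders; the label
keys are the standard well-known Kubernetes labels):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1                             # placeholder node name
  labels:
    kubernetes.io/hostname: worker-1         # set automatically by the kubelet
    topology.kubernetes.io/zone: zone-a      # placeholder value, typically set by the cloud provider
    topology.kubernetes.io/region: region-1  # placeholder value, typically set by the cloud provider
```

A `topologyKey` that names one of these labels then works without any manual node labelling.
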
@@ -822,7 +826,7 @@ An example configuration might look like follows:
An example configuration might look like the following:

```yaml
-apiVersion: kubescheduler.config.k8s.io/v1beta3
+apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration

profiles:
@@ -894,7 +898,7 @@ empty `defaultConstraints` in the `PodTopologySpread` plugin configuration:
and leave the `defaultConstraints` parameter empty in the `PodTopologySpread` plugin
configuration to disable the default Pod spread constraints:

```yaml
-apiVersion: kubescheduler.config.k8s.io/v1beta3
+apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration

profiles: