Internal load balancers (ILB) expose services within the organization from an internal IP pool assigned to the organization. An ILB service is never accessible from any endpoint outside of the organization.
By default, you can access ILB services within the same project from any cluster in the organization. The default project network policy doesn't let you access any project resources from outside the project, and this restriction applies to ILB services as well. If the Platform Administrator (PA) configures project network policies that allow access to your project from other projects, then the ILB service is also accessible from those other projects in the same organization.
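Cross-project access of this kind is granted by the Platform Administrator through a project network policy. As a hedged sketch only — the exact resource schema here is an assumption, so confirm it against the documentation for your GDC release — a policy allowing ingress from another project might look like the following:

```yaml
# Assumed ProjectNetworkPolicy schema; confirm against your GDC release docs.
apiVersion: networking.gdc.goog/v1
kind: ProjectNetworkPolicy
metadata:
  namespace: my-project            # project whose ILB services are exposed
  name: allow-ingress-from-other-project
spec:
  subject:
    subjectType: UserWorkload
  ingress:
  - from:
    - projects:
        matchNames:
        - other-project            # project granted access to my-project
```

The project names `my-project` and `other-project` are illustrative.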
Before you begin
To configure ILBs, you must have the following:
- Own the project you are configuring the load balancer for. For more information, see Create a project.
- The necessary identity and access roles:
  - Ask your Organization IAM Admin to grant you the Load Balancer Admin (`load-balancer-admin`) role.
  - For global ILBs, ask your Organization IAM Admin to grant you the Global Load Balancer Admin (`global-load-balancer-admin`) role.

  For more information, see Predefined role descriptions.
Create an internal load balancer
You can create global or zonal ILBs. The scope of a global ILB spans a GDC universe. The scope of a zonal ILB is limited to the zone specified at the time of creation. For more information, see Global and zonal load balancers.
Create ILBs using three different methods in GDC:
- Use the gdcloud CLI to create global or zonal ILBs.
- Use the Networking Kubernetes Resource Model (KRM) API to create global or zonal ILBs.
- Use the Kubernetes Service directly in the Kubernetes cluster. This method is only available for zonal ILBs.
You can target pod or VM workloads using the KRM API and gdcloud CLI. When you use the Kubernetes Service directly from the Kubernetes cluster, you can only target workloads in the cluster where the Service object is created.
Create a zonal ILB
Create a zonal ILB using the gdcloud CLI, the KRM API, or the Kubernetes Service in the Kubernetes cluster:
gdcloud
Create an ILB that targets pod or VM workloads using the gdcloud CLI.
This ILB targets all of the workloads in the project matching the label defined in the Backend object.
To create an ILB using the gdcloud CLI, follow these steps:
Create a `Backend` resource to define the endpoint for the ILB:

```
gdcloud compute backends create BACKEND_NAME \
  --labels=LABELS \
  --project=PROJECT_NAME \
  --zone=ZONE \
  --cluster=CLUSTER_NAME
```

Replace the following:

- `BACKEND_NAME`: your chosen name for the backend resource, such as `my-backend`.
- `LABELS`: a selector defining which endpoints between pods and VMs to use for this backend resource. For example, `app=web`.
- `PROJECT_NAME`: the name of your project.
- `ZONE`: the zone to use for this invocation. To preset the zone flag for all commands that require it, run `gdcloud config set core/zone ZONE`. The zone flag is available only in multi-zone environments. This field is optional.
- `CLUSTER_NAME`: the cluster to which the scope of the defined selectors is limited. If this field is not specified, all of the endpoints with the given label are selected. This field is optional.
Skip this step if this ILB is for pod workloads. If you are configuring an ILB for VM workloads, define a health check for the ILB:
```
gdcloud compute health-checks create tcp HEALTH_CHECK_NAME \
  --check-interval=CHECK_INTERVAL \
  --healthy-threshold=HEALTHY_THRESHOLD \
  --timeout=TIMEOUT \
  --unhealthy-threshold=UNHEALTHY_THRESHOLD \
  --port=PORT \
  --zone=ZONE
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is `5`. This field is optional.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is `2`. This field is optional.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is `5`. This field is optional.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is `2`. This field is optional.
- `PORT`: the port on which the health check is performed. The default value is `80`. This field is optional.
- `ZONE`: the zone you are creating this ILB in.
Create a `BackendService` resource:

```
gdcloud compute backend-services create BACKEND_SERVICE_NAME \
  --project=PROJECT_NAME \
  --target-ports=TARGET_PORTS \
  --zone=ZONE \
  --health-check=HEALTH_CHECK_NAME
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for this backend service.
- `TARGET_PORTS`: a comma-separated list of target ports that this backend service translates, where each target port specifies the protocol, the port on the forwarding rule, and the port on the backend instance. You can specify multiple target ports. This field must be in the format `protocol:port:targetport`, such as `TCP:80:8080`. This field is optional.
- `HEALTH_CHECK_NAME`: the name of the health check resource. Only include this field if you are configuring an ILB for VM workloads. This field is optional.
Add the previously created `Backend` resource to the `BackendService` resource:

```
gdcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
  --backend=BACKEND_NAME \
  --project=PROJECT_NAME \
  --zone=ZONE
```

Create an internal `ForwardingRule` resource that defines the VIP at which the service is available:

```
gdcloud compute forwarding-rules create FORWARDING_RULE_INTERNAL_NAME \
  --backend-service=BACKEND_SERVICE_NAME \
  --cidr=CIDR \
  --ip-protocol-port=PROTOCOL_PORT \
  --load-balancing-scheme=INTERNAL \
  --zone=ZONE \
  --project=PROJECT_NAME
```

Replace the following:

- `FORWARDING_RULE_INTERNAL_NAME`: your chosen name for the forwarding rule.
- `BACKEND_SERVICE_NAME`: the name of your backend service.
- `CIDR`: the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a zonal subnet. For more information on `Subnet` resources, see Example custom resources. If not specified, an IPv4 `/32` CIDR is automatically reserved from the zonal IP pool. This field is optional.
- `PROTOCOL_PORT`: the protocol and port to expose on the forwarding rule. This field must be in the format `ip-protocol=TCP:80`. The exposed port must be the same as the port the actual application exposes inside the container.
To validate the configured ILB, confirm the `Ready` condition on each of the created objects, then verify the traffic with a `curl` request to the VIP:

To get the assigned VIP, describe the forwarding rule:

```
gdcloud compute forwarding-rules describe FORWARDING_RULE_INTERNAL_NAME
```

Verify the traffic with a `curl` request to the VIP at the port specified in the `PROTOCOL_PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace the following:

- `FORWARDING_RULE_VIP`: the VIP of the forwarding rule.
- `PORT`: the port number from the `PROTOCOL_PORT` field in the forwarding rule.
API
Create an ILB that targets pod or VM workloads using the KRM API. This ILB targets all of the workloads in the project matching the label defined in the Backend object.
To create a zonal ILB using the KRM API, follow these steps:
Create a `Backend` resource to define the endpoints for the ILB. Create `Backend` resources for each zone the workloads are placed in:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: Backend
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_NAME
spec:
  clusterName: CLUSTER_NAME
  endpointsLabels:
    matchLabels:
      app: server
EOF
```

Replace the following:

- `MANAGEMENT_API_SERVER`: the kubeconfig path of the zonal Management API server. For more information, see Switch to a zonal context.
- `PROJECT_NAME`: the name of your project.
- `BACKEND_NAME`: the name of the `Backend` resource.
- `CLUSTER_NAME`: the cluster to which the scope of the defined selectors is limited. This field does not apply to VM workloads. If a `Backend` resource doesn't include the `clusterName` field, the specified labels apply to all of the workloads in the project. This field is optional.
You can use the same `Backend` resource for each zone, or create `Backend` resources with different label sets for each zone.

Skip this step if this ILB is for pod workloads. If you are configuring an ILB for VM workloads, define a health check for the ILB:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: HealthCheck
metadata:
  namespace: PROJECT_NAME
  name: HEALTH_CHECK_NAME
spec:
  tcpHealthCheck:
    port: PORT
    timeoutSec: TIMEOUT
    checkIntervalSec: CHECK_INTERVAL
    healthyThreshold: HEALTHY_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
EOF
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `PORT`: the port on which the health check is performed. The default value is `80`.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is `5`.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is `5`.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is `2`.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is `2`.
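As a worked example, a health check filled in with the default values above might look like the following sketch; the `my-project` namespace and `my-health-check` name are illustrative:

```yaml
apiVersion: networking.gdc.goog/v1
kind: HealthCheck
metadata:
  namespace: my-project      # example project namespace
  name: my-health-check
spec:
  tcpHealthCheck:
    port: 80                 # probe the backend on TCP port 80
    timeoutSec: 5            # fail a probe after 5 seconds without a response
    checkIntervalSec: 5      # start a new probe every 5 seconds
    healthyThreshold: 2      # 2 consecutive passes mark the endpoint healthy
    unhealthyThreshold: 2    # 2 consecutive failures mark it unhealthy
```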
Create a `BackendService` object using the previously created `Backend` resource. If you are configuring an ILB for VM workloads, include the `HealthCheck` resource.

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: BackendService
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_SERVICE_NAME
spec:
  backendRefs:
  - name: BACKEND_NAME
  healthCheckName: HEALTH_CHECK_NAME
EOF
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for your `BackendService` resource.
- `HEALTH_CHECK_NAME`: the name of your previously created `HealthCheck` resource. Don't include this field if you are configuring an ILB for pod workloads.
Create an internal `ForwardingRule` resource defining the VIP at which the service is available:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: ForwardingRuleInternal
metadata:
  namespace: PROJECT_NAME
  name: FORWARDING_RULE_INTERNAL_NAME
spec:
  cidrRef: CIDR
  ports:
  - port: PORT
    protocol: PROTOCOL
  backendServiceRef:
    name: BACKEND_SERVICE_NAME
EOF
```

Replace the following:

- `FORWARDING_RULE_INTERNAL_NAME`: the chosen name for your `ForwardingRuleInternal` resource.
- `CIDR`: the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a zonal subnet. For more information on `Subnet` resources, see Example custom resources. If not specified, an IPv4 `/32` CIDR is automatically reserved from the zonal IP pool. This field is optional.
- `PORT`: use the `ports` field to specify an array of L4 ports for which packets are forwarded to the backends configured with this forwarding rule. At least one port must be specified. Use the `port` field to specify a port number. The exposed port must be the same as the port the actual application exposes inside the container.
- `PROTOCOL`: the protocol to use for the forwarding rule, such as `TCP`. An entry in the `ports` array must look like the following:

  ```
  ports:
  - port: 80
    protocol: TCP
  ```
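Putting the steps above together, a minimal zonal ILB for pod workloads (which needs no health check) might look like the following sketch. The names `my-project`, `my-backend`, `my-backend-service`, and `my-ilb`, and the `app: server` label, are illustrative:

```yaml
# Hypothetical end-to-end manifest for a zonal pod ILB; apply with
# kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f ilb.yaml
apiVersion: networking.gdc.goog/v1
kind: Backend
metadata:
  namespace: my-project
  name: my-backend
spec:
  endpointsLabels:
    matchLabels:
      app: server            # selects workloads labeled app=server
---
apiVersion: networking.gdc.goog/v1
kind: BackendService
metadata:
  namespace: my-project
  name: my-backend-service
spec:
  backendRefs:
  - name: my-backend         # no healthCheckName: this ILB targets pods
---
apiVersion: networking.gdc.goog/v1
kind: ForwardingRuleInternal
metadata:
  namespace: my-project
  name: my-ilb
spec:
  ports:
  - port: 80                 # VIP port; the app must listen on 80 in the container
    protocol: TCP
  backendServiceRef:
    name: my-backend-service
  # cidrRef omitted: a /32 VIP is reserved automatically from the zonal pool
```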
To validate the configured ILB, confirm the `Ready` condition on each of the created objects, then verify the traffic with a `curl` request to the VIP:

To get the VIP, use `kubectl get`:

```
kubectl get forwardingruleinternal -n PROJECT_NAME
```

The output looks like the following:

```
NAME       BACKENDSERVICE         CIDR              READY
ilb-name   BACKEND_SERVICE_NAME   10.200.32.59/32   True
```

Verify the traffic with a `curl` request to the VIP at the port specified in the `PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace `FORWARDING_RULE_VIP` with the VIP of the forwarding rule.
Kubernetes Service
You can create ILBs in GDC by creating a Kubernetes Service object of type LoadBalancer in a Kubernetes cluster. This ILB only targets workloads in the cluster where the Service object is created.
To create an ILB with the Service object, follow these steps:
Create a YAML file for the `Service` definition of type `LoadBalancer`. You must designate the ILB service as internal using the `networking.gke.io/load-balancer-type: internal` annotation.

The following `Service` object is an example of an ILB service:

```
apiVersion: v1
kind: Service
metadata:
  annotations:
    networking.gke.io/load-balancer-type: internal
  name: ILB_SERVICE_NAME
  namespace: PROJECT_NAME
spec:
  ports:
  - port: 1234
    protocol: TCP
    targetPort: 1234
  selector:
    k8s-app: my-app
  type: LoadBalancer
```

Replace the following:

- `ILB_SERVICE_NAME`: the name of the ILB service.
- `PROJECT_NAME`: the namespace of your project that contains the backend workloads.
The `port` field configures the frontend port you expose on the VIP address. The `targetPort` field configures the backend port to which you want to forward the traffic on the backend workloads. The load balancer supports Network Address Translation (NAT), so the frontend and backend ports can be different.

In the `selector` field of the `Service` definition, specify pods or virtual machines as the backend workloads. The selector defines which workloads to take as backend workloads for this service, based on matching the labels you specify with labels on the workloads. The `Service` can only select backend workloads in the same project and same cluster where you define the `Service`.

For more information about service selection, see https://kubernetes.io/docs/concepts/services-networking/service/.
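As an illustration of selector matching and NAT, the following sketch pairs a Deployment whose pods carry the label `k8s-app: my-app` with an internal Service that selects that label. The names, image, and port numbers are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-project            # same project namespace as the Service
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: my-app
  template:
    metadata:
      labels:
        k8s-app: my-app            # label the Service selector matches on
    spec:
      containers:
      - name: web
        image: my-registry/web:1.0 # placeholder workload image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    networking.gke.io/load-balancer-type: internal
  name: my-ilb-service
  namespace: my-project
spec:
  type: LoadBalancer
  selector:
    k8s-app: my-app                # must match the pod labels above
  ports:
  - port: 80                       # frontend port on the VIP
    targetPort: 8080               # backend port on the pods (NAT)
    protocol: TCP
```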
Save the `Service` definition file in the same project as the backend workloads. The ILB service can only select workloads that are in the same cluster as the `Service` definition.

Apply the `Service` definition file to the cluster:

```
kubectl apply -f ILB_FILE
```

Replace `ILB_FILE` with the name of the `Service` definition file for the ILB service.

When you create an ILB service, the service gets an IP address. You can obtain the IP address of the ILB service by viewing the service status:

```
kubectl -n PROJECT_NAME get svc ILB_SERVICE_NAME
```

Replace the following:

- `PROJECT_NAME`: the namespace of your project that contains the backend workloads.
- `ILB_SERVICE_NAME`: the name of the ILB service.
The output is similar to the following example:

```
NAME          TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
ilb-service   LoadBalancer   10.0.0.1     10.0.0.1      1234:31930/TCP   22h
```

The `CLUSTER-IP` and `EXTERNAL-IP` fields must show the same value, which is the IP address of the ILB service. This IP address is now accessible from other clusters in the organization, in accordance with the project network policies that the project has. If you don't obtain an output, ensure that you created the ILB service successfully.
GDC supports Domain Name System (DNS) names for services. However, those names only work in the same cluster for ILB services. From other clusters, you must use the IP address to access the ILB service.
Create a global ILB
Create a global ILB using the gdcloud CLI or the KRM API.
gdcloud
Create an ILB that targets pod or VM workloads using the gdcloud CLI.
This ILB targets all of the workloads in the project matching the label defined in the Backend object. The Backend custom resource must be scoped to a zone.
To create an ILB using the gdcloud CLI, follow these steps:
Create a `Backend` resource to define the endpoint for the ILB:

```
gdcloud compute backends create BACKEND_NAME \
  --labels=LABELS \
  --project=PROJECT_NAME \
  --cluster=CLUSTER_NAME \
  --zone=ZONE
```

Replace the following:

- `BACKEND_NAME`: your chosen name for the backend resource, such as `my-backend`.
- `LABELS`: a selector defining which endpoints between pods and VMs to use for this backend resource. For example, `app=web`.
- `PROJECT_NAME`: the name of your project.
- `CLUSTER_NAME`: the cluster to which the scope of the defined selectors is limited. If this field is not specified, all of the endpoints with the given label are selected. This field is optional.
- `ZONE`: the zone to use for this invocation. To preset the zone flag for all commands that require it, run `gdcloud config set core/zone ZONE`. The zone flag is available only in multi-zone environments. This field is optional.
Skip this step if this ILB is for pod workloads. If you are configuring an ILB for VM workloads, define a health check for the ILB:
```
gdcloud compute health-checks create tcp HEALTH_CHECK_NAME \
  --check-interval=CHECK_INTERVAL \
  --healthy-threshold=HEALTHY_THRESHOLD \
  --timeout=TIMEOUT \
  --unhealthy-threshold=UNHEALTHY_THRESHOLD \
  --port=PORT \
  --global
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is `5`. This field is optional.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is `2`. This field is optional.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is `5`. This field is optional.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is `2`. This field is optional.
- `PORT`: the port on which the health check is performed. The default value is `80`. This field is optional.
Create a `BackendService` resource:

```
gdcloud compute backend-services create BACKEND_SERVICE_NAME \
  --project=PROJECT_NAME \
  --target-ports=TARGET_PORTS \
  --health-check=HEALTH_CHECK_NAME \
  --global
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for this backend service.
- `TARGET_PORTS`: a comma-separated list of target ports that this backend service translates, where each target port specifies the protocol, the port on the forwarding rule, and the port on the backend instance. You can specify multiple target ports. This field must be in the format `protocol:port:targetport`, such as `TCP:80:8080`. This field is optional.
- `HEALTH_CHECK_NAME`: the name of the health check resource. Only include this field if you are configuring an ILB for VM workloads. This field is optional.
Add the previously created `Backend` resource to the `BackendService` resource:

```
gdcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
  --backend-zone BACKEND_ZONE \
  --backend=BACKEND_NAME \
  --project=PROJECT_NAME \
  --global
```

Create an internal `ForwardingRule` resource that defines the VIP at which the service is available:

```
gdcloud compute forwarding-rules create FORWARDING_RULE_INTERNAL_NAME \
  --backend-service=BACKEND_SERVICE_NAME \
  --cidr=CIDR \
  --ip-protocol-port=PROTOCOL_PORT \
  --load-balancing-scheme=INTERNAL \
  --project=PROJECT_NAME \
  --global
```

Replace the following:

- `FORWARDING_RULE_INTERNAL_NAME`: your chosen name for the forwarding rule.
- `CIDR`: the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a global subnet. For more information on `Subnet` resources, see Example custom resources. If not specified, an IPv4 `/32` CIDR is automatically reserved from the global IP pool. This field is optional.
- `PROTOCOL_PORT`: the protocol and port to expose on the forwarding rule. This field must be in the format `ip-protocol=TCP:80`. The exposed port must be the same as the port the actual application exposes inside the container.
To validate the configured ILB, confirm the `Ready` condition on each of the created objects, then verify the traffic with a `curl` request to the VIP:

To get the assigned VIP, describe the forwarding rule:

```
gdcloud compute forwarding-rules describe FORWARDING_RULE_INTERNAL_NAME --global
```

Verify the traffic with a `curl` request to the VIP at the port specified in the `PROTOCOL_PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace the following:

- `FORWARDING_RULE_VIP`: the VIP of the forwarding rule.
- `PORT`: the port number from the `PROTOCOL_PORT` field in the forwarding rule.
API
Create an ILB that targets pod or VM workloads using the KRM API. This ILB targets all of the workloads in the project matching the label defined in the Backend object. To create a global ILB using the KRM API, follow these steps:
Create a `Backend` resource to define the endpoints for the ILB. Create `Backend` resources for each zone the workloads are placed in:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.gdc.goog/v1
kind: Backend
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_NAME
spec:
  clusterName: CLUSTER_NAME
  endpointsLabels:
    matchLabels:
      app: server
EOF
```

Replace the following:

- `MANAGEMENT_API_SERVER`: the kubeconfig path of the global Management API server. For more information, see Switch to the global context.
- `PROJECT_NAME`: the name of your project.
- `BACKEND_NAME`: the name of the `Backend` resource.
- `CLUSTER_NAME`: the cluster to which the scope of the defined selectors is limited. This field does not apply to VM workloads. If a `Backend` resource doesn't include the `clusterName` field, the specified labels apply to all of the workloads in the project. This field is optional.
You can use the same `Backend` resource for each zone, or create `Backend` resources with different label sets for each zone.

Skip this step if this ILB is for pod workloads. If you are configuring an ILB for VM workloads, define a health check for the ILB:

```
apiVersion: networking.global.gdc.goog/v1
kind: HealthCheck
metadata:
  namespace: PROJECT_NAME
  name: HEALTH_CHECK_NAME
spec:
  tcpHealthCheck:
    port: PORT
    timeoutSec: TIMEOUT
    checkIntervalSec: CHECK_INTERVAL
    healthyThreshold: HEALTHY_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
```

Replace the following:

- `HEALTH_CHECK_NAME`: your chosen name for the health check resource, such as `my-health-check`.
- `PORT`: the port on which the health check is performed. The default value is `80`.
- `TIMEOUT`: the amount of time in seconds to wait before claiming failure. The default value is `5`.
- `CHECK_INTERVAL`: the amount of time in seconds from the start of one probe to the start of the next one. The default value is `5`.
- `HEALTHY_THRESHOLD`: the number of sequential probes that must pass for the endpoint to be considered healthy. The default value is `2`.
- `UNHEALTHY_THRESHOLD`: the number of sequential probes that must fail for the endpoint to be considered unhealthy. The default value is `2`.
Since this is a global ILB, create the health check in the global API.
Create a `BackendService` object using the previously created `Backend` resource. If you are configuring an ILB for VM workloads, include the `HealthCheck` resource.

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.global.gdc.goog/v1
kind: BackendService
metadata:
  namespace: PROJECT_NAME
  name: BACKEND_SERVICE_NAME
spec:
  backendRefs:
  - name: BACKEND_NAME
    zone: ZONE
  healthCheckName: HEALTH_CHECK_NAME
  targetPorts:
  - port: PORT
    protocol: PROTOCOL
    targetPort: TARGET_PORT
EOF
```

Replace the following:

- `BACKEND_SERVICE_NAME`: the chosen name for your `BackendService` resource.
- `HEALTH_CHECK_NAME`: the name of your previously created `HealthCheck` resource. Don't include this field if you are configuring an ILB for pod workloads.
- `ZONE`: the zone in which the `Backend` resource is created. You can specify multiple backends in the `backendRefs` field. For example:

  ```
  - name: my-be
    zone: Zone-A
  - name: my-be
    zone: Zone-B
  ```

The `targetPorts` field is optional. It lists ports that this `BackendService` resource translates. If you are using this field, provide values for the following:

- `PORT`: the port exposed by the service.
- `PROTOCOL`: the Layer 4 protocol that traffic must match. Only TCP and UDP are supported.
- `TARGET_PORT`: the port to which the `PORT` value is translated, such as `8080`. The value of `TARGET_PORT` can't be repeated in a given object. An example for `targetPorts` might look like the following:

  ```
  targetPorts:
  - port: 80
    protocol: TCP
    targetPort: 8080
  ```
Create an internal `ForwardingRule` resource defining the VIP at which the service is available:

```
kubectl --kubeconfig MANAGEMENT_API_SERVER apply -f - <<EOF
apiVersion: networking.global.gdc.goog/v1
kind: ForwardingRuleInternal
metadata:
  namespace: PROJECT_NAME
  name: FORWARDING_RULE_INTERNAL_NAME
spec:
  cidrRef: CIDR
  ports:
  - port: PORT
    protocol: PROTOCOL
  backendServiceRef:
    name: BACKEND_SERVICE_NAME
EOF
```

Replace the following:

- `FORWARDING_RULE_INTERNAL_NAME`: the chosen name for your `ForwardingRuleInternal` resource.
- `CIDR`: the name of a `Subnet` resource in the same namespace as this forwarding rule. A `Subnet` resource represents the request and allocation information of a global subnet. For more information on `Subnet` resources, see Example custom resources. If not specified, an IPv4 `/32` CIDR is automatically reserved from the global IP pool. This field is optional.
- `PORT`: use the `ports` field to specify an array of L4 ports for which packets are forwarded to the backends configured with this forwarding rule. At least one port must be specified. Use the `port` field to specify a port number. The exposed port must be the same as the port the actual application exposes inside the container.
- `PROTOCOL`: the protocol to use for the forwarding rule, such as `TCP`. An entry in the `ports` array must look like the following:

  ```
  ports:
  - port: 80
    protocol: TCP
  ```
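For concreteness, a filled-in global pair under assumed names (`my-be` backends in zones `Zone-A` and `Zone-B`, project `my-project`, service `my-global-ilb`) might look like the following sketch:

```yaml
# Hypothetical global BackendService referencing the same backend in two zones,
# with its ForwardingRuleInternal; apply against the global Management API server.
apiVersion: networking.global.gdc.goog/v1
kind: BackendService
metadata:
  namespace: my-project
  name: my-global-backend-service
spec:
  backendRefs:
  - name: my-be
    zone: Zone-A
  - name: my-be
    zone: Zone-B
  targetPorts:
  - port: 80                  # port exposed on the VIP
    protocol: TCP
    targetPort: 8080          # port the application listens on
---
apiVersion: networking.global.gdc.goog/v1
kind: ForwardingRuleInternal
metadata:
  namespace: my-project
  name: my-global-ilb
spec:
  ports:
  - port: 80
    protocol: TCP
  backendServiceRef:
    name: my-global-backend-service
  # cidrRef omitted: a /32 VIP is reserved from the global IP pool
```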
To validate the configured ILB, confirm the `Ready` condition on each of the created objects, then verify the traffic with a `curl` request to the VIP:

To get the VIP, use `kubectl get`:

```
kubectl get forwardingruleinternal -n PROJECT_NAME
```

The output looks like the following:

```
NAME       BACKENDSERVICE         CIDR              READY
ilb-name   BACKEND_SERVICE_NAME   10.200.32.59/32   True
```

Test the traffic with a `curl` request to the VIP at the port specified in the `PORT` field in the forwarding rule:

```
curl http://FORWARDING_RULE_VIP:PORT
```

Replace `FORWARDING_RULE_VIP` with the VIP of the forwarding rule.