Description
TL;DR
We are trying to create private nodes with our own specified pod IP range, so we set the variable enable_private_nodes to true.

After the change, the Terraform run is broken: the Google provider fails to create the node pool with the following error:

```
Error: error creating NodePool: googleapi: Error 400: EnablePrivateNodes must be enabled for private clusters with valid masterIpv4Cidr., badRequest
```

I found that there is no variable in the module for setting this on resource "google_container_node_pool" "pools".
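For reference, the underlying google_container_node_pool resource does expose this field directly (the expected plan below also shows it under network_config). Here is a minimal sketch of a standalone pool that sets it outside the module; the pool name "extra-pool", the location, and var.pod_range are placeholders, not module inputs:

```hcl
# Sketch only: a node pool managed outside the module, where
# network_config.enable_private_nodes can be set explicitly.
resource "google_container_node_pool" "extra_pool" {
  provider = google-beta
  name     = "extra-pool"    # placeholder name
  cluster  = module.gke.name # cluster created by the module
  location = "us-central1"   # placeholder location

  node_config {
    machine_type = "e2-standard-2"
  }

  network_config {
    create_pod_range     = false         # reuse an existing secondary range
    pod_range            = var.pod_range # placeholder variable
    enable_private_nodes = true          # what the API error asks for
  }
}
```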
Expected behavior
A new node pool should be created with a network_config block like the one below:
module.gke.google_container_node_pool.pools["default-node-pool"] will be created with a network config like this:

```
resource "google_container_node_pool" "pools" {
  ...
  network_config {
    enable_private_nodes = (known after apply)
    pod_ipv4_cidr_block  = (known after apply)
    pod_range            = "blablabla"
  }
}
```
Observed behavior
```
Error: error creating NodePool: googleapi: Error 400: EnablePrivateNodes must be enabled for private clusters with valid masterIpv4Cidr., badRequest

  with module.gke.google_container_node_pool.pools["default-node-pool"],
  on .terraform/modules/gke/modules/beta-private-cluster/cluster.tf line 416, in resource "google_container_node_pool" "pools":
 416: resource "google_container_node_pool" "pools" {
```
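Looking at cluster.tf around that line, the module's dynamic network_config block seems to set pod_range without forwarding the cluster-level private-nodes flag. Purely as a sketch of the kind of change that might fix this (the lookup keys and the var.enable_private_nodes wiring are assumptions about the module internals, not the actual upstream code):

```hcl
# Hypothetical patch inside the module's "pools" resource: emit network_config
# when a pod_range is requested, and propagate the cluster-level setting.
dynamic "network_config" {
  for_each = length(lookup(each.value, "pod_range", "")) > 0 ? [each.value] : []
  content {
    pod_range            = lookup(network_config.value, "pod_range", "")
    enable_private_nodes = var.enable_private_nodes # assumed module variable
  }
}
```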
Terraform Configuration
We use the module beta-private-cluster, just like the example in README.md:

```hcl
# google_client_config and kubernetes provider must be explicitly specified like the following.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}" # Here we reference the endpoint of the cluster using the output of module gke
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}

module "gke" {
  source                     = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  project_id                 = "<PROJECT ID>"
  name                       = "gke-test-1"
  region                     = "us-central1"
  zones                      = ["us-central1-a", "us-central1-b", "us-central1-f"]
  network                    = "vpc-01"
  subnetwork                 = "us-central1-01"
  ip_range_pods              = "us-central1-01-gke-01-pods"
  ip_range_services          = "us-central1-01-gke-01-services"
  http_load_balancing        = false
  network_policy             = false
  horizontal_pod_autoscaling = true
  filestore_csi_driver       = false
  enable_private_endpoint    = var.enable_private_endpoint # var.enable_private_endpoint set to true to enable the private endpoint
  enable_private_nodes       = true
  master_ipv4_cidr_block     = "10.0.0.0/28"
  istio                      = true
  cloudrun                   = true
  dns_cache                  = false
  master_authorized_networks = var.enable_private_endpoint ? [] : concat(var.master_authorized_networks, var.additional_master_authorized_networks) # remove master_authorized_networks once we enable the private endpoint

  node_pools = [
    {
      name               = "blabla"
      machine_type       = "e2-standard-2"
      min_count          = 0
      max_count          = 100
      disk_size_gb       = 100
      disk_type          = "pd-ssd"
      image_type         = "COS_CONTAINERD"
      auto_repair        = true
      auto_upgrade       = true
      preemptible        = false
      spot               = false
      initial_node_count = 1
      max_pods_per_node  = 110
      node_locations     = "asia-southeast1-a,asia-southeast1-b"
      pod_range          = "SPECIFIED POD RANGE"
    }
  ]
}
```
Terraform Version
```
$ terraform --version
Terraform v1.0.9
on linux_amd64
```
Additional information
No response