
permadrift on location_policy with beta_private_cluster #1445

@wyardley

Description


TL;DR

With v4.41.0 of the Terraform Google provider, we're seeing permadrift on `autoscaling => location_policy` (with the module's default autoscaling settings). A simple repro case is below.

Expected behavior

Terraform applies cleanly, with no diff on subsequent plans.

Observed behavior

```
  ~ autoscaling {
      - location_policy = "BALANCED" -> null
        # (4 unchanged attributes hidden)
    }
```

Terraform Configuration

```hcl
terraform {
  required_version = "1.3.3"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.42.0"
    }
  }
}

variable "project" {
  type    = string
  default = "xyz"
}

variable "region" {
  type    = string
  default = "us-west2"
}

provider "google" {
  project               = var.project
  region                = var.region
  user_project_override = true
}

module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  version = "23.2.0"

  project_id = var.project
  name       = "testcluster"
  regional   = true
  region     = var.region
  zones      = ["us-west2-a", "us-west2-b"]
  network    = "xxxx"

  create_service_account       = false
  dns_cache                    = true
  enable_private_endpoint      = false
  enable_private_nodes         = true
  master_ipv4_cidr_block       = "100.127.192.48/28"
  master_authorized_networks   = []
  master_global_access_enabled = false
  subnetwork                   = "xxx"

  # I'm guessing these could probably be defaults as well
  ip_range_pods     = "xxxx"
  ip_range_services = "xxxx"

  kubernetes_version       = "1.24.3-gke.2100"
  remove_default_node_pool = true
  initial_node_count       = 1
  gce_pd_csi_driver        = true

  node_pools = [{
    name               = "default-node-pool"
    machine_type       = "e2-standard-4"
    node_locations     = "us-west2-a,us-west2-b"
    min_count          = 1
    max_count          = 1
    node_metadata      = "GCE_METADATA"
    local_ssd_count    = 0
    disk_size_gb       = 100
    disk_type          = "pd-ssd"
    image_type         = "COS_CONTAINERD"
    auto_repair        = true
    auto_upgrade       = false
    enable_secure_boot = true
    preemptible        = false
  }]
}
```

Terraform Version

```
Terraform v1.3.3
on darwin_arm64
+ provider registry.terraform.io/hashicorp/google v4.42.0
+ provider registry.terraform.io/hashicorp/google-beta v4.42.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.14.0
+ provider registry.terraform.io/hashicorp/random v3.4.3
```

Additional information

This may be related to deleting the default node pool, but `cluster_autoscaling` appears to default to balanced mode: https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/beta-private-cluster
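As a possible workaround, a sketch only: pinning the value explicitly in the node pool definition might keep the plan clean. I haven't confirmed that module v23.2.0 forwards a `location_policy` key from `node_pools`, so that key name is an assumption here.

```hcl
node_pools = [{
  name      = "default-node-pool"
  min_count = 1
  max_count = 1
  # Assumption: the module passes this key through to the provider's
  # autoscaling block. Depending on the module version, an unknown key
  # may simply be ignored, in which case the drift would remain.
  location_policy = "BALANCED"
}]
```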

I tried going back to the 4.36, 4.38, and 4.39 providers, and the problem persists, so maybe this is an API-level change or something else?

If I run with `TF_LOG=debug`, I see

```json
"autoscaling": {
  "enabled": true,
  "minNodeCount": 1,
  "maxNodeCount": 1,
  "locationPolicy": "BALANCED"
},
```

as well as

```json
"autoscaling": {
  "autoscalingProfile": "BALANCED"
},
```

in the API responses.

I tested older and newer provider versions, but let me know if this seems to be an upstream provider (and/or Google API response) issue. I also tested jumping back to v22.1.0 of this module.
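To help isolate module vs. provider, here's a rough sketch of the same node pool defined directly against the provider resource, bypassing the module (names and the cluster reference are placeholders taken from the repro above; whether explicitly pinning `location_policy` in the `autoscaling` block absorbs the drift is an assumption that depends on the provider version exposing the attribute):

```hcl
resource "google_container_node_pool" "default" {
  name     = "default-node-pool"
  cluster  = "testcluster"
  location = "us-west2"

  autoscaling {
    min_node_count = 1
    max_node_count = 1
    # Explicitly setting the value the API reports back should, in
    # theory, absorb the `"BALANCED" -> null` drift on newer providers.
    location_policy = "BALANCED"
  }

  node_config {
    machine_type = "e2-standard-4"
    image_type   = "COS_CONTAINERD"
  }
}
```

If this minimal config shows the same drift when `location_policy` is omitted, that would point at the provider/API rather than the module.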

Labels: bug