
Firewall rule names conflict if var.name is too long, since the name is truncated. #1527

@awx-fuyuanchu

Description


TL;DR

Firewall rule names conflict if var.name is too long, because the name is truncated before being used:

"gke-${substr(var.name, 0, min(25, length(var.name)))}-webhooks"
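One possible way to make the truncated name collision-resistant (a sketch only, not the module's current implementation; the 18-character prefix length and 6-character suffix are illustrative choices) is to append a short hash of the full, untruncated var.name:

```hcl
# Hypothetical sketch: append a short hash of the full name so two long
# names sharing the same prefix no longer collide after truncation.
# "gke-" (4) + 18 + "-" (1) + 6 + "-webhooks" (9) = 38 chars, well under
# GCP's 63-character resource-name limit.
locals {
  # first 6 hex characters of the SHA-256 of the untruncated name
  name_suffix = substr(sha256(var.name), 0, 6)

  webhook_firewall_name = "gke-${substr(var.name, 0, min(18, length(var.name)))}-${local.name_suffix}-webhooks"
}
```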

Expected behavior

Generate unique firewall rule names regardless of the length of var.name.

Observed behavior

If I have two clusters with long names such as

var.name = a-cluster-running-in-region-hk
var.name = a-cluster-running-in-region-sg

the firewall rule name generated for the webhook rule is the same for both clusters, since var.name is truncated.
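To illustrate the collision: evaluating the truncation expression for both names (e.g. in terraform console) yields the same 25-character prefix, so both clusters produce the firewall rule name gke-a-cluster-running-in-regi-webhooks:

```
> substr("a-cluster-running-in-region-hk", 0, 25)
"a-cluster-running-in-regi"
> substr("a-cluster-running-in-region-sg", 0, 25)
"a-cluster-running-in-regi"
```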

Terraform Configuration

Nothing special. Create two clusters whose long names share the same first 25 characters, and set add_master_webhook_firewall_rules to true.

# google_client_config and kubernetes provider must be explicitly specified like the following.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}

module "gke" {
  source                            = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  project_id                        = "<PROJECT ID>"
  name                              = "a-cluster-running-in-region-sg"
  add_master_webhook_firewall_rules = true
  region                            = "us-central1"
  zones                             = ["us-central1-a", "us-central1-b", "us-central1-f"]
  network                           = "vpc-01"
  subnetwork                        = "us-central1-01"
  ip_range_pods                     = "us-central1-01-gke-01-pods"
  ip_range_services                 = "us-central1-01-gke-01-services"
  http_load_balancing               = false
  network_policy                    = false
  horizontal_pod_autoscaling        = true
  filestore_csi_driver              = false
  enable_private_endpoint           = true
  enable_private_nodes              = true
  master_ipv4_cidr_block            = "10.0.0.0/28"
  istio                             = true
  cloudrun                          = true
  dns_cache                         = false

  node_pools = [
    {
      name                      = "default-node-pool"
      machine_type              = "e2-medium"
      node_locations            = "us-central1-b,us-central1-c"
      min_count                 = 1
      max_count                 = 100
      local_ssd_count           = 0
      spot                      = false
      local_ssd_ephemeral_count = 0
      disk_size_gb              = 100
      disk_type                 = "pd-standard"
      image_type                = "COS_CONTAINERD"
      enable_gcfs               = false
      enable_gvnic              = false
      auto_repair               = true
      auto_upgrade              = true
      service_account           = "project-service-account@<PROJECT ID>.iam.gserviceaccount.com"
      preemptible               = false
      initial_node_count        = 80
    },
  ]

  node_pools_oauth_scopes = {
    all = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }

  node_pools_labels = {
    all = {}
    default-node-pool = {
      default-node-pool = true
    }
  }

  node_pools_metadata = {
    all = {}
    default-node-pool = {
      node-pool-metadata-custom-value = "my-node-pool"
    }
  }

  node_pools_taints = {
    all = []
    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = true
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []
    default-node-pool = [
      "default-node-pool",
    ]
  }
}
# google_client_config and kubernetes provider must be explicitly specified like the following.
data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}

module "gke" {
  source                            = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  project_id                        = "<PROJECT ID>"
  name                              = "a-cluster-running-in-region-hk"
  add_master_webhook_firewall_rules = true
  region                            = "us-central1"
  zones                             = ["us-central1-a", "us-central1-b", "us-central1-f"]
  network                           = "vpc-01"
  subnetwork                        = "us-central1-01"
  ip_range_pods                     = "us-central1-01-gke-01-pods"
  ip_range_services                 = "us-central1-01-gke-01-services"
  http_load_balancing               = false
  network_policy                    = false
  horizontal_pod_autoscaling        = true
  filestore_csi_driver              = false
  enable_private_endpoint           = true
  enable_private_nodes              = true
  master_ipv4_cidr_block            = "10.0.0.0/28"
  istio                             = true
  cloudrun                          = true
  dns_cache                         = false

  node_pools = [
    {
      name                      = "default-node-pool"
      machine_type              = "e2-medium"
      node_locations            = "us-central1-b,us-central1-c"
      min_count                 = 1
      max_count                 = 100
      local_ssd_count           = 0
      spot                      = false
      local_ssd_ephemeral_count = 0
      disk_size_gb              = 100
      disk_type                 = "pd-standard"
      image_type                = "COS_CONTAINERD"
      enable_gcfs               = false
      enable_gvnic              = false
      auto_repair               = true
      auto_upgrade              = true
      service_account           = "project-service-account@<PROJECT ID>.iam.gserviceaccount.com"
      preemptible               = false
      initial_node_count        = 80
    },
  ]

  node_pools_oauth_scopes = {
    all = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }

  node_pools_labels = {
    all = {}
    default-node-pool = {
      default-node-pool = true
    }
  }

  node_pools_metadata = {
    all = {}
    default-node-pool = {
      node-pool-metadata-custom-value = "my-node-pool"
    }
  }

  node_pools_taints = {
    all = []
    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = true
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []
    default-node-pool = [
      "default-node-pool",
    ]
  }
}

Terraform Version

Terraform v1.0.8

Additional information

Error log

module.gke.google_compute_firewall.master_webhooks[0]: Creating...
╷
│ Error: Error creating Firewall: googleapi: Error 409: The resource 'projects/xxxxxx/global/firewalls/gke-xxxxxxxxxxxxxxxxxxxxxxxxx-webhooks' already exists, alreadyExists
│
│   with module.gke.google_compute_firewall.master_webhooks[0],
│   on .terraform/modules/gke/modules/beta-private-cluster/firewall.tf line 93, in resource "google_compute_firewall" "master_webhooks":
│   93: resource "google_compute_firewall" "master_webhooks" {
│

Metadata

Labels: bug (Something isn't working)