Note: This feature is available at the Preview release level.
The Cloud Run External Metrics Autoscaling (CREMA) project leverages KEDA to provide autoscaling for Cloud Run services and worker pools.
This project currently depends on KEDA v2.17. The included table lists various KEDA scalers and their compatibility for use with Cloud Run.
| Scalers | Cloud Run Compatible |
|---|---|
| Apache Kafka | Verified |
| Cron | Verified |
| GitHub Runner | Verified |
| CPU | Incompatible |
| Kubernetes Workload | Incompatible |
| Memory | Incompatible |
See https://keda.sh/docs/2.17/scalers/ for the full list of KEDA's scalers. The compatibility for any KEDA scaler not listed above is currently unknown. Please file an issue if you believe a scaler does not work.
Follow the instructions below to configure, deploy, and verify your CREMA service.
- Google Cloud SDK: Ensure you have the Google Cloud SDK installed and configured.
- Authentication: Authenticate with Google Cloud:

  ```sh
  gcloud auth login
  gcloud auth application-default login
  ```
- Project Configuration: Set your default project:

  ```sh
  gcloud config set project MY_PROJECT_ID
  ```

  Replace `MY_PROJECT_ID` with your actual Google Cloud project ID.
Create a GCP service account that will be used by the Cloud Run CREMA service. We'll grant this service account the necessary permissions throughout the setup. Those permissions will be:
- Parameter Manager Parameter Viewer (`roles/parametermanager.parameterViewer`) to retrieve the CREMA configuration you'll be creating from Parameter Manager.
- Cloud Run Developer (`roles/run.developer`) and Service Account User (`roles/iam.serviceAccountUser`) to set the number of instances in your scaled workloads.
```sh
PROJECT_ID=my-project
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud iam service-accounts create $CREMA_SERVICE_ACCOUNT_NAME \
  --display-name="CREMA Service Account"
```

Follow the steps below to create a YAML configuration file for CREMA in Parameter Manager.
Create a Parameter in Parameter Manager to store your CREMA config. This parameter is where you will store Parameter Versions to be used by CREMA:
```sh
PARAMETER_ID=crema-config
PARAMETER_REGION=global

gcloud parametermanager parameters create $PARAMETER_ID \
  --location=$PARAMETER_REGION \
  --parameter-format=YAML
```

Locally, create a YAML file for your CREMA configuration. See the Configuration README for reference.
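For illustration only, the sketch below writes a config file with a single KEDA Cron trigger. The trigger metadata (`timezone`, `start`, `end`, `desiredReplicas`) comes from KEDA's Cron scaler, but the surrounding field names and the target path are hypothetical placeholders; follow the Configuration README for the actual schema.

```sh
# Illustrative sketch only -- consult the Configuration README for the real schema.
# The fields outside `triggers` (scaledObjects, target, minInstances, maxInstances)
# are hypothetical placeholders; the trigger block follows KEDA's Cron scaler metadata.
cat > ./my-crema-config.yaml <<'EOF'
scaledObjects:
  - target: projects/my-project/locations/us-central1/workerPools/my-worker-pool-to-be-scaled
    minInstances: 0
    maxInstances: 5
    triggers:
      - type: cron
        metadata:
          timezone: America/Los_Angeles
          start: 0 8 * * 1-5
          end: 0 18 * * 1-5
          desiredReplicas: "3"
EOF
```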
Upload your local YAML file as a new parameter version:
```sh
LOCAL_YAML_CONFIG_FILE=./my-crema-config.yaml
PARAMETER_ID=crema-config
PARAMETER_REGION=global
PARAMETER_VERSION=1

gcloud parametermanager parameters versions create $PARAMETER_VERSION \
  --location=$PARAMETER_REGION \
  --parameter=$PARAMETER_ID \
  --payload-data-from-file=$LOCAL_YAML_CONFIG_FILE
```

Grant your CREMA service account permission to read from Parameter Manager:
```sh
PROJECT_ID=my-project
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$CREMA_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/parametermanager.parameterViewer"
```

Grant your CREMA service account permission to scale the services and worker pools you've specified in your CREMA configuration. This can be done by granting `roles/run.developer` either at the project level or on each individual service or worker pool to be scaled.
Granting the required permissions at the project level enables CREMA to scale any services or worker pools you specify in the configuration; you'll be able to add more services or worker pools in the future without modifying permissions again. To grant these permissions at the project level:
```sh
PROJECT_ID=my-project
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$CREMA_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/run.developer"
```

Alternatively, granting the required permissions on each individual service or worker pool limits the permissions to strictly what's necessary and is considered a security best practice. To grant these permissions for each individual service or worker pool:
```sh
# For a service
PROJECT_ID=my-project
SERVICE_NAME=my-service-to-be-scaled
SERVICE_REGION=us-central1
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud run services add-iam-policy-binding $SERVICE_NAME \
  --region=$SERVICE_REGION \
  --member="serviceAccount:$CREMA_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/run.developer"

# For a worker pool
PROJECT_ID=my-project
WORKER_POOL_NAME=my-worker-pool-to-be-scaled
WORKER_POOL_REGION=us-central1
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud alpha run worker-pools add-iam-policy-binding $WORKER_POOL_NAME \
  --region=$WORKER_POOL_REGION \
  --member="serviceAccount:$CREMA_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/run.developer"
```

Grant your CREMA service account `roles/iam.serviceAccountUser` on the service accounts which run the services and worker pools to be scaled:
```sh
PROJECT_ID=my-project
CONSUMER_SERVICE_ACCOUNT_NAME=my-worker-pool-sa
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud iam service-accounts add-iam-policy-binding \
  $CONSUMER_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com \
  --member="serviceAccount:$CREMA_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```

Deploy your CREMA service using either:
- our pre-built container image at `us-central1-docker.pkg.dev/cloud-run-oss-images/crema-v1/autoscaler`, or
- a container image you build yourself from the source code using Cloud Build (see instructions below).
The command below deploys the service using the pre-built container image; if you want to deploy a container image you built yourself, update the `IMAGE` variable to point to it.
Configure the variables and run the deploy command:
- `SERVICE_NAME`: The name for your CREMA service.
- `SERVICE_REGION`: The region to run your CREMA service in.
- `CREMA_SERVICE_ACCOUNT_NAME`: The name of the service account which will run CREMA.
- `PARAMETER_VERSION`: The parameter version you created.
```sh
SERVICE_NAME=my-crema-service
SERVICE_REGION=us-central1
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account
PARAMETER_VERSION=1
CREMA_CONFIG_PARAM_VERSION=projects/$PROJECT_ID/locations/$PARAMETER_REGION/parameters/$PARAMETER_ID/versions/$PARAMETER_VERSION
IMAGE=us-central1-docker.pkg.dev/cloud-run-oss-images/crema-v1/autoscaler:1.0

gcloud beta run deploy $SERVICE_NAME \
  --image=${IMAGE} \
  --region=${SERVICE_REGION} \
  --service-account="${CREMA_SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --no-allow-unauthenticated \
  --no-cpu-throttling \
  --base-image=us-central1-docker.pkg.dev/serverless-runtimes/google-22/runtimes/java21 \
  --labels=created-by=crema \
  --set-env-vars="CREMA_CONFIG=${CREMA_CONFIG_PARAM_VERSION},OUTPUT_SCALER_METRICS=False,ENABLE_CLOUD_LOGGING=False"
```

The following environment variables are checked by the container:
- `CREMA_CONFIG`: Required. The fully qualified name (FQN) of the parameter version which contains your CREMA config.
- `OUTPUT_SCALER_METRICS`: Optional. If true, CREMA will emit metrics to Cloud Monitoring.
- `ENABLE_CLOUD_LOGGING`: Optional. If true, CREMA will log errors to Cloud Logging for improved log searchability.
Note: The `OUTPUT_SCALER_METRICS` and `ENABLE_CLOUD_LOGGING` flags are disabled by default as these may incur additional costs. See Cloud Observability Pricing for details.
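If you want to enable either feature on a CREMA service you've already deployed, one option is to update its environment variables in place rather than redeploying; the sketch below assumes the service name and region from the deploy step above.

```sh
SERVICE_NAME=my-crema-service
SERVICE_REGION=us-central1

# Turn on metrics export and Cloud Logging for the running CREMA service
gcloud run services update $SERVICE_NAME \
  --region=$SERVICE_REGION \
  --update-env-vars="OUTPUT_SCALER_METRICS=True,ENABLE_CLOUD_LOGGING=True"
```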
If you set the `OUTPUT_SCALER_METRICS=True` environment variable, you'll also have to grant your CREMA service account permission to write metrics:
```sh
PROJECT_ID=my-project
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$CREMA_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"
```

If you set the `ENABLE_CLOUD_LOGGING=True` environment variable, you'll also have to grant your CREMA service account permission to write log entries:
```sh
PROJECT_ID=my-project
CREMA_SERVICE_ACCOUNT_NAME=crema-service-account

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$CREMA_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"
```

Use the information below to verify that your CREMA service is running correctly.
When your CREMA service is running, you should see log entries like the following in your service's logs each time metrics are refreshed:
Each log message is labeled with the component that emitted it.
```
[INFO] [METRIC-PROVIDER] Starting metric collection cycle
[INFO] [METRIC-PROVIDER] Successfully fetched scaled object metrics ...
[INFO] [METRIC-PROVIDER] Sending scale request ...
[INFO] [SCALER] Received ScaleRequest ...
[INFO] [SCALER] Current instances ...
[INFO] [SCALER] Recommended instances ...
```

TIP: Use the following Cloud Logging query for filtering the CREMA service's logs:

```
"[SCALER]" OR "[METRIC-PROVIDER]"
```
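You can also run the same query from the command line with `gcloud logging read`; the sketch below additionally restricts results to the CREMA service's revisions and assumes the service name used earlier.

```sh
PROJECT_ID=my-project
SERVICE_NAME=my-crema-service

# Fetch the most recent CREMA scaling logs from Cloud Logging
gcloud logging read \
  "resource.type=\"cloud_run_revision\" AND resource.labels.service_name=\"$SERVICE_NAME\" AND (\"[SCALER]\" OR \"[METRIC-PROVIDER]\")" \
  --project=$PROJECT_ID \
  --limit=50
```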
Follow the steps below to build CREMA and make the resulting container image available in Artifact Registry.
Create an Artifact Registry repository to store the CREMA container image if you don't already have one:
```sh
PROJECT_ID=my-project
CREMA_REPO_NAME=crema
AR_REGION=us-central1

gcloud artifacts repositories create "${CREMA_REPO_NAME}" \
  --repository-format=docker \
  --location=$AR_REGION \
  --description="Docker repository for CREMA images"
```

Use Google Cloud Build and the included Dockerfile to build the container image and push it to Artifact Registry. Run the following command from the root of this project:
```sh
PROJECT_ID=my-project
CREMA_REPO_NAME=crema
AR_REGION=us-central1

gcloud builds submit --tag $AR_REGION-docker.pkg.dev/$PROJECT_ID/$CREMA_REPO_NAME/crema:latest .
```

Note that this build process may take 30+ minutes.
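To deploy CREMA with the image you just built, point the `IMAGE` variable from the deploy step at this Artifact Registry path (reusing the variable names above) before running the deploy command:

```sh
# Use the self-built image instead of the pre-built one in the deploy command above
IMAGE=$AR_REGION-docker.pkg.dev/$PROJECT_ID/$CREMA_REPO_NAME/crema:latest
```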
If configured, CREMA will emit the following metrics:
- `custom.googleapis.com/$TRIGGER_TYPE/metric_value`: The metric value it received, per trigger type.
- `custom.googleapis.com/recommended_instance_count`: The number of instances recommended, per Cloud Run scaled object.
- `custom.googleapis.com/requested_instance_count`: The number of instances requested, per Cloud Run scaled object.
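One way to confirm these metrics are being written (assuming `OUTPUT_SCALER_METRICS=True` and the Cloud Monitoring API is enabled) is to list the project's custom metric descriptors via the Cloud Monitoring API; this is only a quick sketch using curl and a gcloud access token.

```sh
PROJECT_ID=my-project

# List custom metric descriptors; CREMA's metrics should appear once data has been written
curl -G -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT_ID}/metricDescriptors" \
  --data-urlencode 'filter=metric.type = starts_with("custom.googleapis.com/")'
```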
As a result, KEDA configuration fields that rely on environment variables, i.e. those with a `FromEnv` suffix such as `usernameFromEnv` and `passwordFromEnv` from KEDA's Redis scaler, are not supported.
Many Google Cloud Monitoring metrics have a 2+ minute ingestion delay, which may affect scaling responsiveness for Google Cloud Platform scalers. See the Google Cloud metrics list for latency details on the underlying metrics used by each scaler.