This project demonstrates how to scale Google Cloud Run workers based on Redis queue depth using Cloud Run External Metrics Autoscaling.
Google Cloud Run's new Worker Pools make it possible to run background processing tasks separately from web services, which means Celery workers can run on Cloud Run. However, Cloud Run does not provide built-in autoscaling based on external metrics such as the depth of a Redis queue.
Google's CREMA (Cloud Run External Metrics Autoscaler) project fills this gap by providing autoscaling based on external metrics, in this case the length of a Redis list.
This project provides a minimal example of a Django application with Celery workers running on Cloud Run, with autoscaling of the workers based on Redis queue depth using CREMA.
- `core/`: Django app containing the task and UI.
- `scaler_test/`: Django project settings.
- `Dockerfile`: Container definition for both the Web and Worker services.
- `terraform/`: Terraform configuration for the infrastructure, including the CREMA config in `main.tf`.
The application is a simple Django application with a Celery integration.
It provides a web interface to trigger a user-defined number of Celery tasks. The tasks simulate work by sleeping for a specified number of seconds.
The tasks are queued in Redis. The CREMA autoscaler monitors the length of the Redis queue and scales the number of Celery worker instances accordingly.
As described in the CREMA documentation, the autoscaler is configured via a
YAML file. For consistency, the configuration is embedded in the Terraform
configuration in `terraform/main.tf`. Expressed as plain YAML, the
configuration looks like this:
"apiVersion": "crema/v1"
"kind": "CremaConfig"
"spec":
"pollingInterval": 30
"scaledObjects":
- "spec":
"maxReplicaCount": 10
"minReplicaCount": 0
"scaleTargetRef":
"name": "projects/<YOUR_PROJECT_NAME>/locations/<YOUR_LOCATION>/workerpools/<YOUR_WORKER_POOL_NAME>"
"triggers":
- "metadata":
"address": "<YOUR_REDIS_IP>:6379"
"listLength": "5"
"listName": "celery"
"name": "redis-trigger"
"type": "redis"Note that the configuration for a Redis queue is simpler than in the full CREMA documentation:
- A
triggerAuthenticationssection is not needed because Redis does not require authentication in our setup. - The configuration in the
triggerssection is adopted from the KEDA Redis List Length scaler documentation. - The
listNameis set tocelery, which is the default Redis list name used by Celery.
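The `listLength: "5"` target follows KEDA's scaler semantics: the autoscaler aims for roughly one worker per five queued tasks, clamped between `minReplicaCount` and `maxReplicaCount`. A minimal sketch of that calculation (KEDA's documented behaviour; whether CREMA rounds identically is an assumption):

```python
import math


def desired_replicas(queue_len: int, target: int = 5,
                     min_rep: int = 0, max_rep: int = 10) -> int:
    """KEDA-style scaling: one replica per `target` queued items, clamped."""
    want = math.ceil(queue_len / target)
    return max(min_rep, min(max_rep, want))


print(desired_replicas(0))   # empty queue: scale to zero
print(desired_replicas(12))  # 12 queued tasks: 3 workers
print(desired_replicas(99))  # capped at maxReplicaCount
```

With the values above, an empty queue scales the worker pool to zero, and the pool never exceeds ten instances no matter how deep the queue gets.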
This example includes a complete Terraform configuration to deploy the application on Google Cloud Run with CREMA autoscaling.
You can deploy the application on Google Cloud Run using Terraform. Note that automated builds via Cloud Build are not set up; you need to build and push the Docker image manually before applying the Terraform configuration.
- A Google Cloud Project
- Terraform installed
- `gcloud` authenticated and configured for your project
- Google Container Registry (GCR) and Cloud Build API enabled
- A repository created in GCR to store the Docker image
- Docker image built and pushed to Google Container Registry (GCR)
A simple way to build and push the Docker image is with `gcloud builds submit`:

```shell
gcloud builds submit --tag "<YOUR_REGION>-docker.pkg.dev/<YOUR_PROJECT_ID>/<YOUR_GCR_REPO>/scaler-test"
```

Then initialize Terraform:

```shell
cd terraform
terraform init
```

The Terraform configuration requires the variables listed in `variables.tf`.
Create a `vars.tfvars` file with the following content, replacing the
placeholders with your values:

```hcl
project_id        = "<YOUR_GCP_PROJECT_ID>"
region            = "<YOUR_GCP_REGION>"
application_image = "<YOUR_IMAGE_TAG_ON_GCR>" # See gcloud builds submit command above
crema_image       = "us-central1-docker.pkg.dev/cloud-run-oss-images/crema-v1/autoscaler:1.0"
```

To create the infrastructure in your Google Cloud Project, run:

```shell
terraform apply --var-file="vars.tfvars"
```

You can run the application locally without CREMA using Docker Compose:

```shell
docker-compose up --build
```
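For reference, a Compose setup for this kind of stack typically wires up three services. The sketch below is illustrative only; the service names, commands, and environment variables are assumptions, not copied from the repo's actual `docker-compose.yml`:

```yaml
services:
  redis:
    image: redis:7
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
    depends_on:
      - redis
  worker:
    build: .
    command: celery -A scaler_test worker --loglevel=info
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
    depends_on:
      - redis
```

Both `web` and `worker` are built from the same `Dockerfile`, mirroring how the Cloud Run Web service and Worker Pool share one container image.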