Introduction
AWS introduced Amazon ElastiCache Serverless in November 2023. However, many developers still rely on the older node-based (provisioned) clusters, which can be a real pain to manage and scale.
In this tutorial, we will set up ElastiCache Serverless with the Valkey engine using OpenTofu.
Open Source stack focus
This guide is based entirely on an open source stack:
- Valkey instead of Redis. After Redis switched from the BSD license to a dual RSALv2/SSPLv1 model, it is no longer open source in the OSI sense, which makes it a risky long-term choice for many applications. Valkey is the community-driven fork, hosted by the Linux Foundation, that remains fully open.
- OpenTofu instead of Terraform, for a similar reason: Terraform is no longer open source after HashiCorp moved it to the Business Source License (BUSL). OpenTofu is the community fork that stays open under the Linux Foundation.
That said, everything in this guide also works with Redis and Terraform if they better suit your needs.
Prerequisites
Before you begin, make sure you have the following set up:
- An AWS account with sufficient IAM permissions to create ElastiCache, VPC, and KMS resources. If you don't already have one, you can sign up for an AWS Free Tier account.
- AWS CLI installed and configured with valid credentials. You can follow the official AWS CLI setup guide if needed.
- A working directory for your code. In this guide, I'll use the directory elasticache_guide.
Install OpenTofu on your machine
Since we’ll be working with OpenTofu, the first step is installing it.
Instead of installing OpenTofu directly, we'll use tofuutils/tenv, a powerful version manager for OpenTofu, Terraform, Terragrunt, Terramate, and Atmos, written in Go.
We're starting with tenv because OpenTofu evolves rapidly, and having a version manager makes it much easier to switch between versions and stay up to date.
To install tenv, follow the instructions in the tenv GitHub README for your operating system. Once tenv is set up, you can install a specific version of OpenTofu by running:
$ tenv tofu install 1.10.3
That’s it — you now have OpenTofu ready to use!
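If you juggle multiple projects, it can also be handy to pin the OpenTofu version per directory. The snippet below is a minimal example assuming tenv's version-file support; check the tenv README for the exact file names it recognizes.

$ cd elasticache_guide
$ echo "1.10.3" > .opentofu-version   # tenv picks this file up and uses the pinned version
$ tofu version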
Create an AWS ElastiCache OpenTofu module
In this guide, we’ll adhere to Terraform and OpenTofu best practices by creating a dedicated OpenTofu module to provision ElastiCache Serverless.
If you're new to Terraform modules or want a refresher on best practices, check out the Terraform Module Best Practices article by DevOpsCube.
Let’s walk through the process step by step.
Step 1
Create a new directory for the module inside your working folder:
$ mkdir -p elasticache_guide/modules/elasticache_serverless
Step 2
Inside the elasticache_guide/modules/elasticache_serverless directory, create a file named main.tf. This file provides the basic specification of the ElastiCache cluster.
resource "aws_elasticache_serverless_cache" "this" { count = var.create ? 1 : 0 engine = var.engine name = var.cache_name dynamic "cache_usage_limits" { for_each = length(var.cache_usage_limits) > 0 ? [var.cache_usage_limits] : [] content { dynamic "data_storage" { for_each = try([cache_usage_limits.value.data_storage], []) content { maximum = try(data_storage.value.maximum, null) minimum = try(data_storage.value.minimum, null) unit = try(data_storage.value.unit, "GB") } } dynamic "ecpu_per_second" { for_each = try([cache_usage_limits.value.ecpu_per_second], []) content { maximum = try(ecpu_per_second.value.maximum, null) minimum = try(ecpu_per_second.value.minimum, null) } } } } daily_snapshot_time = var.daily_snapshot_time description = coalesce(var.description, "Serverless Cache") kms_key_id = var.kms_key_id major_engine_version = var.major_engine_version security_group_ids = var.security_group_ids snapshot_arns_to_restore = var.snapshot_arns_to_restore snapshot_retention_limit = var.snapshot_retention_limit subnet_ids = var.subnet_ids user_group_id = aws_elasticache_user_group.main.id timeouts { create = try(var.timeouts.create, "40m") delete = try(var.timeouts.delete, "80m") update = try(var.timeouts.update, "40m") } tags = var.tags }
Step 3
Create the access.tf file. Here, we will manage ElastiCache users.
resource "random_string" "auth_user_password" { for_each = { for k, v in var.access_users : k => v if try(v.generate_password, false) } length = 16 special = false override_special = "/@£$" } # # Regenerate auth_user / access_u2g_mapping # locals { access_users = { for k, v in var.access_users : k => { generated_name = format("%s-%s", var.cache_name, k) access_string = v["access_string"] auth_type = v["auth_type"] engine = v["engine"] passwords = try(v["generate_password"], false) ? [random_string.auth_user_password[k].result] : try(v["passwords"], null) } } } resource "aws_elasticache_user" "main" { for_each = local.access_users user_id = each.value["generated_name"] user_name = each.value["generated_name"] access_string = each.value["access_string"] engine = each.value["engine"] authentication_mode { type = each.value["auth_type"] passwords = each.value["passwords"] } tags = var.tags } resource "aws_elasticache_user_group" "main" { user_group_id = var.cache_name engine = var.engine tags = var.tags lifecycle { ignore_changes = [user_ids] } } resource "aws_elasticache_user_group_association" "main" { for_each = local.access_users user_group_id = aws_elasticache_user_group.main.user_group_id user_id = aws_elasticache_user.main[each.key].user_id depends_on = [ aws_elasticache_user.main, aws_elasticache_user_group.main ] }
Pay attention: when working with AWS ElastiCache Serverless, there are two primary ways to authenticate and manage access to your cluster:
- IAM authentication: connections to ElastiCache for Valkey are authenticated with AWS IAM identities. This improves your security posture and simplifies administrative tasks, especially when operating within the AWS ecosystem (see the policy sketch after this list).
- Traditional password-based authentication: each cache user has a long-lived password that is passed to the AUTH command to authenticate client connections. This approach is straightforward but requires careful credential management to stay secure.
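For IAM authentication to actually work, the IAM identity that connects also needs permission to call the elasticache:Connect action on both the serverless cache and the cache user. Below is a minimal, illustrative policy sketch; the region, account ID, and resource names are placeholders, so verify the exact ARN format against the AWS documentation.

data "aws_iam_policy_document" "elasticache_connect" {
  statement {
    effect  = "Allow"
    actions = ["elasticache:Connect"]

    # Hypothetical ARNs: replace the region, account ID, cache name, and user ID with your own
    resources = [
      "arn:aws:elasticache:us-east-1:123456789012:serverlesscache:kvendingoldo-demo-elasticache-serverless",
      "arn:aws:elasticache:us-east-1:123456789012:user:kvendingoldo-demo-elasticache-serverless-admin-iam",
    ]
  }
}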
Step 4
Create the outputs.tf file. It's the default file for OpenTofu module outputs.
output "arn" { description = "The amazon resource name of the serverless cache" value = try(aws_elasticache_serverless_cache.this[0].arn, null) } output "create_time" { description = "Timestamp of when the serverless cache was created" value = try(aws_elasticache_serverless_cache.this[0].create_time, null) } output "endpoint" { description = " Represents the information required for client programs to connect to a cache node" value = try(aws_elasticache_serverless_cache.this[0].endpoint, null) } output "full_engine_version" { description = "The name and version number of the engine the serverless cache is compatible with" value = try(aws_elasticache_serverless_cache.this[0].full_engine_version, null) } output "major_engine_version" { description = "The version number of the engine the serverless cache is compatible with" value = try(aws_elasticache_serverless_cache.this[0].major_engine_version, null) } output "reader_endpoint" { description = "Represents the information required for client programs to connect to a cache node" value = try(aws_elasticache_serverless_cache.this[0].reader_endpoint, null) } output "status" { description = "The current status of the serverless cache. The allowed values are CREATING, AVAILABLE, DELETING, CREATE-FAILED and MODIFYING" value = try(aws_elasticache_serverless_cache.this[0].status, null) } output "access_users" { value = local.access_users }
Step 5
Create the variables.tf file. This is where the OpenTofu module variables are declared, with some sensible defaults pre-defined.
variable "create" { description = "Determines whether serverless resource will be created." type = bool default = true } variable "cache_name" { description = "The name which serves as a unique identifier to the serverless cache." type = string default = null } variable "cache_usage_limits" { description = "Sets the cache usage limits for storage and ElastiCache Processing Units for the cache." type = map(any) default = {} } variable "daily_snapshot_time" { description = "The daily time that snapshots will be created from the new serverless cache. Only supported for engine type `redis`. Defaults to 0." type = string default = null } variable "description" { description = "User-created description for the serverless cache." type = string default = null } variable "engine" { description = "Name of the cache engine to be used for this cache cluster. Valid values are `memcached` or `redis`." type = string default = "redis" } variable "kms_key_id" { description = "ARN of the customer managed key for encrypting the data at rest. If no KMS key is provided, a default service key is used." type = string default = null } variable "major_engine_version" { description = "The version of the cache engine that will be used to create the serverless cache." type = string default = null } variable "security_group_ids" { description = "One or more VPC security groups associated with the serverless cache." type = list(string) default = [] } variable "snapshot_arns_to_restore" { description = "The list of ARN(s) of the snapshot that the new serverless cache will be created from. Available for Redis only." type = list(string) default = null } variable "snapshot_retention_limit" { description = "(Redis only) The number of snapshots that will be retained for the serverless cache that is being created." type = number default = null } variable "subnet_ids" { description = "A list of the identifiers of the subnets where the VPC endpoint for the serverless cache will be deployed." type = list(string) default = [] } variable "timeouts" { description = "Define maximum timeout for creating, updating, and deleting serverless resources." type = map(string) default = {} } variable "tags" { description = "A map of tags to add to all resources" type = map(string) default = {} } # # Access # variable "access_users" { default = {} }
Step 6
Create the versions.tf file. This is a metadata file that pins the OpenTofu and provider versions.
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.77"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.0"
    }
  }
}
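At this point, your module directory should contain the following files:

modules/elasticache_serverless/
├── main.tf
├── access.tf
├── outputs.tf
├── variables.tf
└── versions.tf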
That's it - you've created your own module for managing AWS ElastiCache Serverless! Now, let's move on to building the dependent infrastructure that we will use in this guide.
Create dependent AWS infrastructure
The ElastiCache module alone is not enough for this guide; to actually use the cluster, we also need supporting infrastructure such as a VPC, subnets, a KMS key, and security groups. Let's define all of it in the elasticache_guide/main.tf file:
module "vpc" { source = "terraform-aws-modules/vpc/aws" version = "~> 5.0" name = "kvendingoldo-demo-elasticache-serverless" cidr = "10.0.0.0/16" azs = ["us-east-1a", "us-east-1b"] private_subnets = ["10.0.1.0/24", "10.0.2.0/24"] enable_dns_hostnames = true enable_dns_support = true } module "security_groups" { source = "terraform-aws-modules/security-group/aws" version = "~> 5.0" name = "kvendingoldo-demo-elasticache-serverless" description = "Security group for ElastiCache Serverless" vpc_id = module.vpc.vpc_id security_group_rules = { ingress_self = { type = "ingress" from_port = 6379 to_port = 6379 protocol = "tcp" self = true description = "Allow self-traffic" } egress_all = { type = "egress" from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] description = "Allow all egress" } } } module "kms_elasticache_serverless" { source = "terraform-aws-modules/kms/aws" version = "~> 1.0" alias = "kvendingoldo-demo-elasticache-serverless" description = "KMS key for ElastiCache Serverless encryption" enable_key_rotation = true } module "elasticache_serverless" { source = "./modules/elasticache_serverless" engine = "valkey" cache_name = "kvendingoldo-demo-elasticache-serverless" cache_usage_limits = { data_storage = { maximum = 2 # 2Gb } ecpu_per_second = { maximum = 2000 # Approximate for 2 vCPUs (1000 per vCPU) } } daily_snapshot_time = "22:00" description = "Valkey serverless cluster for kvendingoldo demo" kms_key_id = module.kms_elasticache_serverless[0].key_arn major_engine_version = "7" security_group_ids = [ module.security_groups["elasticache_serverless"].id ] subnet_ids = module.vpc.private_subnets snapshot_retention_limit = 7 access_users = { "admin-iam" = { access_string = "on ~* +@all" engine = "valkey" auth_type = "iam" } "admin-pwd" = { access_string = "on ~* +@all" engine = "valkey" auth_type = "password" generate_password = true } } } output "elasticache_serverless_endpoint" { value = module.elasticache_serverless.endpoint } output "elasticache_serverless_users" { value = module.elasticache_serverless.access_users }
As you can see, this file uses publicly available community modules to create the VPC, KMS key, and security group. If you prefer, you can use native AWS provider resources instead (see the sketch below).
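For example, a rough sketch of the same security group built from native resources could look like this (illustrative only; the name, port, and VPC reference mirror the module configuration above):

resource "aws_security_group" "elasticache_serverless" {
  name        = "kvendingoldo-demo-elasticache-serverless"
  description = "Security group for ElastiCache Serverless"
  vpc_id      = module.vpc.vpc_id
}

# Allow cache traffic between members of this security group
resource "aws_vpc_security_group_ingress_rule" "self" {
  security_group_id            = aws_security_group.elasticache_serverless.id
  referenced_security_group_id = aws_security_group.elasticache_serverless.id
  from_port                    = 6379
  to_port                      = 6379
  ip_protocol                  = "tcp"
  description                  = "Allow self-traffic"
}

# Allow all outbound traffic
resource "aws_vpc_security_group_egress_rule" "all" {
  security_group_id = aws_security_group.elasticache_serverless.id
  ip_protocol       = "-1"
  cidr_ipv4         = "0.0.0.0/0"
  description       = "Allow all egress"
}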
Apply the OpenTofu code
Before applying our AWS resources, we need to initialize OpenTofu. During this step, OpenTofu initializes the working directory, downloads the required providers, and pulls in the community modules we referenced.
To do this, run the following command:
$ tofu init
Next, let's see what OpenTofu plans to do by generating an execution plan using the following command:
$ tofu plan
After reviewing the plan, it’s finally time to apply our configuration using the following command:
$ tofu apply -auto-approve
Once completed, OpenTofu will display the details of the provisioned ElastiCache Valkey instance.
Connect to the cluster
Step 1: Get an endpoint
As mentioned earlier, you’ll see the connection details in the OpenTofu output. Alternatively, you can retrieve the endpoint using the AWS CLI with the following command:
aws elasticache describe-serverless-caches --query "ServerlessCaches[?ServerlessCacheName=='kvendingoldo-demo-elasticache-serverless'].Endpoint"
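For the password-based user you'll also need the generated password. One way to read it is straight from the OpenTofu output (this assumes jq is installed; the user name to authenticate with is the generated_name field from the same output):

$ tofu output -json elasticache_serverless_users | jq -r '."admin-pwd".passwords[0]'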
Step 2: Connect with redis-cli
Valkey is compatible with Redis clients, so you can use your preferred client to connect to the endpoint on the default port 6379. Keep in mind that ElastiCache Serverless enforces encryption in transit, so your client must support TLS. In my case, I use redis-cli:
redis-cli -h <serverless_endpoint> -p 6379 --tls --user <your_username> -a <your_password>
Run a simple command to confirm the connection: ping
You should get a response: PONG
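You can also run a quick write and read round trip to make sure the cache is fully working:

SET demo-key "hello"
OK
GET demo-key
"hello"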
Cleaning up resources
After confirming that everything works, you can destroy the created resources when they are no longer needed to avoid unnecessary costs. Run the following command:
$ tofu destroy -auto-approve
This will remove the ElastiCache instance and all related resources.
Conclusion
In this guide, we walked through creating a dedicated OpenTofu module to provision AWS ElastiCache Serverless with the Valkey engine, following best practices for infrastructure as code. We covered setting up OpenTofu, building reusable modules, and connecting to your Valkey instance.
By leveraging OpenTofu and Valkey, you gain a fully open source and flexible solution for managing serverless caching on AWS, helping you avoid licensing pitfalls and maintain control over your infrastructure.
Feel free to extend the module to fit your needs and explore additional automation opportunities. If you have any questions or want to dive deeper into related topics, just let me know!
Happy caching!