Axel Dlv for AWS Community Builders

How to Build an Event-Driven Outbox Pattern with AWS, Terraform and LocalStack

Introduction

When working with microservices, guaranteeing consistency between a database write and an event emission (such as sending an SQS message) is tricky. This is especially true in distributed systems, where network failures and retries can easily cause duplicate or lost messages.

The Outbox Pattern offers a robust solution, and in this article we'll build it from scratch using:

  • Terraform for infrastructure-as-code
  • LocalStack for running AWS services locally
  • AWS services: DynamoDB, Lambda, SQS, IAM

We’ll walk through setting up the architecture, defining the Terraform modules, and testing the event-driven system locally.

You can find the complete code and project structure on GitHub: HERE

What Is the Outbox Pattern?

The Outbox Pattern avoids the dual-write problem by splitting responsibilities:

1. The application writes business data and an event within the same database transaction.
2. A separate process (e.g., a Lambda) reads new entries from the orders.outbox table (here via DynamoDB Streams) and pushes each event to an external system (e.g., SQS).

This guarantees durability and avoids losing messages during partial failures.
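
To see what can go wrong without it, here is a minimal sketch (hypothetical code, not from the repository) of the naive dual write the pattern replaces:

import json

import boto3

dynamodb = boto3.client("dynamodb")
sqs = boto3.client("sqs")

def naive_dual_write(order_item: dict, queue_url: str):
    # Write 1: persist the order (order_item uses DynamoDB's typed format).
    dynamodb.put_item(TableName="orders", Item=order_item)
    # Write 2: emit the event. If the process crashes between the two calls,
    # the order exists but no event is ever sent; if the send is retried
    # after a partial failure, consumers may see duplicates.
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(order_item))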

Architecture Overview

[Diagram: outbox-pattern-architecture]

  • Two DynamoDB tables: orders and orders.outbox
  • Stream enabled on orders.outbox table
  • Lambda triggered by stream, sends events to an SQS FIFO queue

Terraform Modules Breakdown

1. Provider & Root Configuration

We configure the Terraform AWS provider to point all service endpoints at LocalStack:

provider "aws" { region = local.region s3_use_path_style = true skip_credentials_validation = true skip_metadata_api_check = true endpoints { cloudformation = local.localstack_endpoint cloudwatch = local.localstack_endpoint dynamodb = local.localstack_endpoint iam = local.localstack_endpoint lambda = local.localstack_endpoint s3 = local.localstack_s3_endpoint sqs = local.localstack_endpoint } } module "iam" { source = "./modules/iam" common_tags = local.common_tags } module "dynamodb" { source = "./modules/dynamodb" common_tags = local.common_tags } module "lambda" { source = "./modules/lambda" lambda_execution_role = module.iam.lambda_execution_role localstack_endpoint = local.localstack_endpoint dynamodb_outbox_stream_arn = module.dynamodb.dynamodb_outbox_stream_arn dynamodb_outbox_arn = module.dynamodb.dynamodb_outbox_arn dynamodb_orders_arn = module.dynamodb.dynamodb_orders_arn dynamodb_outbox_name = module.dynamodb.dynamodb_outbox_name dynamodb_orders_name = module.dynamodb.dynamodb_orders_name sync_queue_url = module.sqs.sync_queue_fifo_url common_tags = local.common_tags region = local.region } module "sqs" { source = "./modules/sqs" common_tags = local.common_tags } 

This setup ensures every AWS service call is redirected to LocalStack.
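
On the application side, the same redirection works by handing boto3 an explicit endpoint. A minimal sketch, assuming LocalStack's default gateway on localhost:4566 and its dummy "test" credentials:

import boto3

# LocalStack's default edge endpoint and dummy credentials (assumptions;
# adjust if your LocalStack setup differs).
LOCALSTACK = dict(
    endpoint_url="http://localhost:4566",
    region_name="eu-west-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

dynamodb = boto3.client("dynamodb", **LOCALSTACK)
sqs = boto3.client("sqs", **LOCALSTACK)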

2. IAM Module

Sets up the necessary IAM roles and permissions for Lambda to interact with other AWS services.

resource "aws_iam_role" "lambda_execution_role" { name = "lambda-execution-role" assume_role_policy = <<EOF { "Version": "2012-10-17", "Statement": [ { "Action": "sts:AssumeRole", "Principal": { "Service": "lambda.amazonaws.com" }, "Effect": "Allow", "Sid": "" }] }  EOF  tags = merge(var.common_tags, { Name = "lambda-execution-role" }) } resource "aws_iam_role_policy" "lambda_policy" { name = "lambda-dynamodb-policy" role = aws_iam_role.lambda_execution_role.name policy = jsonencode({ Version = "2012-10-17", Statement = [ # DynamoDB permissions { Effect = "Allow", Action = [ "dynamodb:PutItem", "dynamodb:GetItem", "dynamodb:UpdateItem", "dynamodb:DeleteItem", ], Resource = "*" }, # CloudWatch Logs permissions (for logging) { Effect = "Allow", Action = [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], Resource = "*" } ] }) } 

3. DynamoDB Module

We define two DynamoDB tables:

  • orders — the main business table.
  • orders.outbox — stores events related to changes in the orders table, with DynamoDB Streams enabled to capture changes.
resource "aws_dynamodb_table" "orders" { name = "orders" billing_mode = "PAY_PER_REQUEST" hash_key = "userId" range_key = "orderId" attribute { name = "userId" type = "S" } attribute { name = "orderId" type = "S" } tags = merge(var.common_tags, { name = "dynamodb_orders_table" }) } resource "aws_dynamodb_table" "outbox" { name = "orders.outbox" billing_mode = "PAY_PER_REQUEST" stream_enabled = true stream_view_type = "NEW_IMAGE" hash_key = "orderId" range_key = "eventId" attribute { name = "orderId" type = "S" } attribute { name = "eventId" type = "S" } tags = merge(var.common_tags, { Name = "dynamodb_outbox_table" }) } 
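Because the stream is configured with NEW_IMAGE, the processor receives items in DynamoDB's typed wire format. Here is an illustrative sketch (sample values are made up) of one stream record and how boto3's TypeDeserializer flattens it:

from boto3.dynamodb.types import TypeDeserializer

deserializer = TypeDeserializer()

# Illustrative shape of one DynamoDB Streams record with a NEW_IMAGE payload.
sample_record = {
    "eventName": "INSERT",
    "dynamodb": {
        "NewImage": {
            "orderId": {"S": "12345"},
            "eventId": {"S": "0f8e..."},   # illustrative value
            "status": {"S": "PENDING"},
        }
    },
}

new_image = sample_record["dynamodb"]["NewImage"]
event = {k: deserializer.deserialize(v) for k, v in new_image.items()}
# event == {"orderId": "12345", "eventId": "0f8e...", "status": "PENDING"}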

4. SQS Module

Creates the SQS FIFO queue used to dispatch events from the outbox to other services, along with a FIFO dead-letter queue that receives messages after three failed processing attempts.

resource "aws_sqs_queue" "sync_queue_fifo" { name = "sync-queue.fifo" fifo_queue = true content_based_deduplication = true redrive_policy = jsonencode({ deadLetterTargetArn = aws_sqs_queue.sync_queue_dlq_fifo.arn, maxReceiveCount = 3 }) tags = merge(var.common_tags, { Name = "sync_queue_fifo" }) } resource "aws_sqs_queue" "sync_queue_dlq_fifo" { name = "sync-queue-dlq.fifo" fifo_queue = true content_based_deduplication = true tags = merge(var.common_tags, { Name = "sync_queue_dlq_fifo" }) } 

5. Lambda Module

This module defines two functions: outboxProcessingLambda, which performs the transactional write (covered in the next section), and syncEventLambda, which acts as the outbox processor:

  • Triggers on the DynamoDB stream from the outbox table; the event source mapping's filter criteria only passes records whose status is "PENDING".
  • Processes events and forwards them to the SQS queue.
resource "aws_lambda_function" "sync_event_lambda" { function_name = "syncEventLambda" handler = "sync-event.lambda_handler" runtime = "python3.12" filename = "${path.root}/app/sync-event.zip" role = var.lambda_execution_role source_code_hash = var.force_lambda_update ? filebase64sha256("${path.root}/app/sync-event.zip") : "" # Forcer la mise à jour du code de la fonction Lambda environment { variables = { DYNAMODB_ENDPOINT = var.localstack_endpoint OUTBOX_TABLE = var.dynamodb_outbox_name SQS_QUEUE_URL = var.sync_queue_url SQS_ENDPOINT = var.localstack_endpoint REGION = var.region AWS_ACCESS_KEY_ID = "test" # For localstack AWS_SECRET_ACCESS_KEY = "test" # For localstack } } tags = merge(var.common_tags, { Name = "sync_event_lambda" }) } resource "aws_lambda_function" "outbox_lambda" { function_name = "outboxProcessingLambda" handler = "outbox-lambda.lambda_handler" runtime = "python3.12" filename = "${path.root}/app/outbox-lambda.zip" role = var.lambda_execution_role source_code_hash = var.force_lambda_update ? filebase64sha256("${path.root}/app/outbox-lambda.zip") : "" # Forcer la mise à jour du code de la fonction Lambda environment { variables = { ORDERS_TABLE = var.dynamodb_orders_name OUTBOX_TABLE = var.dynamodb_outbox_name DYNAMODB_ENDPOINT = var.localstack_endpoint REGION = var.region AWS_ACCESS_KEY_ID = "test" # For localstack AWS_SECRET_ACCESS_KEY = "test" # For localstack  } } tags = merge(var.common_tags, { Name = "outbox_lambda" }) } resource "aws_lambda_event_source_mapping" "trigger_outbox" { event_source_arn = var.dynamodb_outbox_stream_arn function_name = aws_lambda_function.sync_event_lambda.arn starting_position = "LATEST" batch_size = 10 filter_criteria { filter { pattern = jsonencode({ dynamodb = { NewImage = { status = { S = ["PENDING"] } } } }) } } tags = merge(var.common_tags, { Name = "trigger_outbox" }) } 

Lambda Function Code Overview

Make sure to zip both Python scripts before deploying; the Lambda module expects them at app/sync-event.zip and app/outbox-lambda.zip.

To complement the infrastructure setup, here’s a high-level look at the Lambda function implementations in Python.

outbox-lambda.py
This Lambda inserts the business order data and its outbox event into the two DynamoDB tables atomically, using a single transact_write_items transaction.
It generates a unique event ID and marks the event status as "PENDING".

import json
import os
from uuid import uuid4

import boto3

# Table names and the LocalStack endpoint are injected by Terraform (see the Lambda module).
ORDERS_TABLE = os.environ["ORDERS_TABLE"]
OUTBOX_TABLE = os.environ["OUTBOX_TABLE"]
dynamodb_client = boto3.client("dynamodb", endpoint_url=os.environ.get("DYNAMODB_ENDPOINT"))


def insert_order_and_outbox(data, timestamp):
    response = dynamodb_client.transact_write_items(
        TransactItems=[
            {
                'Put': {
                    'TableName': ORDERS_TABLE,
                    'Item': {
                        'userId': {'S': data["userId"]},
                        'orderId': {'S': data["orderId"]},
                        'courierId': {'S': data.get("courierId")},
                        'notificationId': {'S': data.get("notificationId")},
                        'message': {'S': data.get("message")},
                        'createdAt': {'S': str(timestamp)}
                    }
                }
            },
            {
                'Put': {
                    'TableName': OUTBOX_TABLE,
                    'Item': {
                        'orderId': {'S': data["orderId"]},
                        'eventId': {'S': str(uuid4())},
                        'eventType': {'S': data["eventType"]},
                        'eventTimestamp': {'S': str(timestamp)},
                        'status': {'S': "PENDING"},
                        'payload': {'S': json.dumps(data)}
                    }
                }
            }
        ]
    )
    return response

This approach ensures the event is durably stored alongside the business data in the same transaction boundary.
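
As a usage sketch, calling the function with a payload like the one used in the test section below:

from datetime import datetime, timezone

data = {
    "userId": "user-456",
    "orderId": "12345",
    "courierId": "courier-789",
    "notificationId": "notif-9999",
    "eventType": "Delivered",
    "message": "Your order #12345 has been delivered by courier-789.",
}

# One transaction writes the order row and its "PENDING" outbox event;
# either both items land or neither does.
insert_order_and_outbox(data, datetime.now(timezone.utc).isoformat())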

sync-event.py
Triggered by DynamoDB Streams on the outbox table, this Lambda:

  • Deserializes new records with status "PENDING".
  • Sends the event payload to an SQS FIFO queue with retry and exponential backoff.
  • Upon successful send, updates the outbox event’s status to "SENT" to avoid duplicate processing.
def send_message_to_sqs_with_retry(queue_url: str, message: dict, attributes: dict,
                                   max_retries=3, base_delay=0.5) -> str:
    for attempt in range(max_retries + 1):
        try:
            response = sqs.send_message(
                QueueUrl=queue_url,
                MessageBody=json.dumps(message),
                MessageGroupId="outbox-event",
                MessageAttributes=attributes
            )
            status_code = response['ResponseMetadata']['HTTPStatusCode']
            message_id = response.get("MessageId")
            if status_code == 200 and message_id:
                logger.info(f"SQS message sent successfully. MessageId: {message_id}")
                return message_id
            else:
                raise Exception(f"Unexpected SQS response: {response}")
        ...


def lambda_handler(event, context):
    ...
    # Send to SQS with validation and retry
    message_id = send_message_to_sqs_with_retry(
        queue_url=SQS_QUEUE_URL,
        message=message,
        attributes={
            'eventType': {
                'DataType': 'String',
                'StringValue': event_type
            }
        }
    )

    # Update DynamoDB status only after successful send
    table = dynamodb.Table(OUTBOX_TABLE)
    table.update_item(
        Key={
            'orderId': order_id,
            'eventId': event_id
        },
        UpdateExpression='SET #s = :sent',
        ConditionExpression='#s = :pending',
        ExpressionAttributeNames={'#s': 'status'},
        ExpressionAttributeValues={
            ':sent': 'SENT',
            ':pending': 'PENDING'
        }
    )
    ...

This design guarantees reliable event dispatch with strong delivery semantics.
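
The retry branch is elided above; see the repository for the full version. As a rough illustration only (a sketch, not the repository's exact code), an exponential-backoff handler could look like this:

import json
import logging
import random
import time

logger = logging.getLogger(__name__)

def send_with_backoff(sqs, queue_url, message, max_retries=3, base_delay=0.5):
    """Hypothetical illustration of a retry loop with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            response = sqs.send_message(
                QueueUrl=queue_url,
                MessageBody=json.dumps(message),
                MessageGroupId="outbox-event",
            )
            return response["MessageId"]
        except Exception as exc:
            if attempt == max_retries:
                raise  # give up; the stream batch is retried and the DLQ takes over
            # Double the delay on each attempt, plus a little jitter.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            logger.warning("SQS send failed (%s); retrying in %.2fs", exc, delay)
            time.sleep(delay)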

For the full code, visit the GitHub repository.

Testing the Flow Locally

1. Start LocalStack:

localstack start 

2. Deploy the infrastructure:

terraform init
terraform apply

[Screenshot: Stack-Overview]

3. Invoke the outboxProcessingLambda manually to simulate an event:
First, you need to install awscli-local.
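
awscli-local is a small PyPI package that wraps the AWS CLI and points it at LocalStack:

pip install awscli-local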

awslocal lambda invoke \
  --function-name outboxProcessingLambda \
  --region eu-west-1 \
  --payload '{
    "userId": "user-456",
    "notificationId": "notif-9999",
    "orderId": "12345",
    "courierId": "courier-789",
    "eventType": "Delivered",
    "eventTimestamp": "2025-04-23T13:45:00Z",
    "message": "Your order #12345 has been delivered by courier-789."
  }' \
  --cli-binary-format raw-in-base64-out \
  output.json && cat output.json

4. Check DynamoDB (via the LocalStack UI or CLI) to verify that the data was inserted into both the orders and orders.outbox tables.
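
For example, via the CLI:

awslocal dynamodb scan --table-name orders --region eu-west-1
awslocal dynamodb scan --table-name orders.outbox --region eu-west-1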

orders table

[Screenshot: dynamodb-orders]

orders.outbox table

[Screenshots: dynamodb-orders-outbox-a, dynamodb-orders-outbox]

5. Check SQS messages (via the LocalStack UI or CLI) to verify that the event was propagated.

[Screenshot: SQS-localstack]
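
You can also pull the message from the CLI (queue name as defined in the SQS module):

awslocal sqs receive-message \
  --queue-url "$(awslocal sqs get-queue-url --queue-name sync-queue.fifo --query QueueUrl --output text)" \
  --region eu-west-1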

Advantages of This Setup

  • LocalStack lets you test cloud-native apps without incurring AWS costs.
  • Using Terraform allows you to reuse and version infrastructure code easily.
  • The Outbox Pattern ensures data and event integrity in distributed systems.

Conclusion

Combining the Outbox Pattern with Terraform and LocalStack gives you a powerful, reliable, and testable approach for building event-driven microservices. Whether you're experimenting locally or preparing for production-scale architecture, this pattern helps ensure strong delivery guarantees and system resiliency.

You can find the complete code and project structure on GitHub: HERE

Top comments (7)

Werliton Silva

Hmm! Interesting. Thanks

Axel Dlv (AWS Community Builders)

Enjoy!

Emil

Well, I think you must mention that the outbox item and the table item have to be written in one transaction in order to make the outbox pattern useful. Otherwise your initial write can fail and the event still be sent, or vice versa. This is what a transactional outbox should solve. Therefore, use a transaction. Your article describes it in the wrong way.

Axel Dlv (AWS Community Builders)

Hello,
This is what I mentioned at the beginning: "The application writes business data and an event within the same database transaction."
It is indeed necessary that both writes are performed within the same transaction.

Emil • Edited

But the code example does not show it. It shows two put_item operations.

I can see you updated it. Nice!! Thank you

Axel Dlv (AWS Community Builders) • Edited

Yes indeed.

I changed the code to use a transaction earlier.

Thanks for your feedback.

Jesse

Overengineered Rube Goldberg. Thanks for the laugh.