The Security Wake-Up Call
Your containerized applications are under scrutiny. Security scans have revealed critical issues:
- Base images with known vulnerabilities (CVE-2023-XXXX, CVE-2024-XXXX)
- Runtime secrets exposed in image layers (API keys, passwords visible in image history)
- No image signing or verification (anyone can push malicious images)
- Compliance gaps for financial services (PCI-DSS, SOC 2 requirements not met)
This isn't just about fixing a few Dockerfiles. You need to secure the entire container lifecycle: build, registry, deployment, and runtime.
In this article, I'll walk through a comprehensive container security strategy on AWS that addresses vulnerabilities, implements secrets management, enables image signing, enforces runtime security, and automates compliance.
Container Security Lifecycle
The Four Pillars
```
┌─────────────┐    ┌──────────────┐    ┌─────────────┐    ┌──────────────┐
│    BUILD    │ →  │   REGISTRY   │ →  │ DEPLOYMENT  │ →  │   RUNTIME    │
│             │    │              │    │             │    │              │
│ Secure base │    │ Image scan   │    │ Policy      │    │ Least        │
│ Multi-stage │    │ Signing      │    │ validation  │    │ privilege    │
│ No secrets  │    │ Access       │    │ RBAC        │    │ Isolation    │
└─────────────┘    └──────────────┘    └─────────────┘    └──────────────┘
```

Phase 1: Secure Base Images and Minimal Attack Surface
The Problem: Vulnerable Base Images
Many Dockerfiles start with:
```dockerfile
FROM ubuntu:latest
# or
FROM node:latest
```

These base images often contain:
- Unnecessary packages and tools
- Known vulnerabilities
- Large attack surface
- Outdated packages
Solution: Minimal, Trusted Base Images
Option 1: Use Distroless Images
```dockerfile
# Before: Vulnerable base image
FROM node:18
COPY . .
RUN npm install
CMD ["node", "app.js"]

# After: Distroless base image
FROM node:18-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Copy application source into the build stage
COPY . .

FROM gcr.io/distroless/nodejs18-debian11
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app .
USER nonroot:nonroot
CMD ["app.js"]
```

Benefits:
- No shell, no package manager (reduces attack surface; see the quick check after this list)
- Minimal OS footprint
- Only runtime dependencies
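You can sanity-check the "no shell" claim locally. The snippet below is an illustrative sketch (it assumes Docker is installed and the image name is just an example): it tries to start `sh` inside the image and treats failure as confirmation that no shell is present.

```python
#!/usr/bin/env python3
# verify_no_shell.py - confirm an image ships without a shell (illustrative sketch)
import subprocess
import sys

def has_shell(image: str) -> bool:
    """Return True if /bin/sh can be executed inside the image."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--entrypoint", "sh", image, "-c", "true"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Example image; pass your own as the first argument
    image = sys.argv[1] if len(sys.argv) > 1 else "gcr.io/distroless/nodejs18-debian11"
    if has_shell(image):
        print(f"❌ {image} contains a shell")
        sys.exit(1)
    print(f"✅ {image} has no shell")
```

Run against a distroless image the container fails to start (no `sh` to exec), which is exactly the property an attacker landing inside the container would miss.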
Option 2: Alpine Linux
```dockerfile
FROM alpine:3.18 AS builder
RUN apk add --no-cache nodejs npm
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Copy application source into the build stage
COPY . .

FROM alpine:3.18
RUN apk add --no-cache nodejs
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app .
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
USER nodejs:nodejs
CMD ["node", "app.js"]
```

Option 3: AWS-Optimized Base Images
```dockerfile
# Use AWS-provided base images
FROM public.ecr.aws/lambda/nodejs:18
# or
FROM public.ecr.aws/docker/library/node:18-slim
```

Multi-Stage Build Best Practices
Secure Multi-Stage Build:
```dockerfile
# Stage 1: Build dependencies
FROM node:18-slim AS deps
WORKDIR /app
# Copy only package files first (better layer caching)
COPY package*.json ./
# Install dependencies with security flags
RUN npm ci --only=production --ignore-scripts && \
    npm audit --audit-level=moderate && \
    rm -rf /tmp/* /var/tmp/*

# Stage 2: Build application
FROM node:18-slim AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build with security hardening
RUN npm run build && \
    npm prune --production && \
    rm -rf /tmp/* /var/tmp/*

# Stage 3: Runtime (minimal)
FROM gcr.io/distroless/nodejs18-debian11:nonroot
WORKDIR /app
# Copy only production artifacts
COPY --from=builder --chown=nonroot:nonroot /app/dist ./dist
COPY --from=builder --chown=nonroot:nonroot /app/node_modules ./node_modules
COPY --from=builder --chown=nonroot:nonroot /app/package.json ./
# No secrets, no build tools, no shell
EXPOSE 8080
CMD ["dist/index.js"]
```

Base Image Vulnerability Scanning
Pre-Build Base Image Check:
```bash
#!/bin/bash
# check-base-image.sh

BASE_IMAGE=$1

if [ -z "$BASE_IMAGE" ]; then
  echo "Usage: $0 <base-image>"
  exit 1
fi

echo "Scanning base image: $BASE_IMAGE"

# Use Trivy to scan base image
trivy image --severity HIGH,CRITICAL --exit-code 1 "$BASE_IMAGE"

if [ $? -eq 0 ]; then
  echo "Base image passed security scan"
  exit 0
else
  echo "Base image has critical vulnerabilities"
  exit 1
fi
```

Integrate into CI/CD:
```yaml
# .github/workflows/docker-build.yml
name: Build and Scan

on:
  push:
    branches: [main]

jobs:
  check-base-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Check base image
        run: |
          BASE_IMAGE=$(grep '^FROM' Dockerfile | head -1 | awk '{print $2}')
          ./check-base-image.sh "$BASE_IMAGE"

  build:
    needs: check-base-image
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build Docker image
        run: docker build -t payment-app:latest .
```

Automated Base Image Updates
Dependabot for Docker:
```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10
```

Phase 2: Secrets Management (Not in Images)
The Problem: Secrets in Image Layers
Common Anti-Pattern:
```dockerfile
# NEVER DO THIS
FROM node:18
ENV DB_PASSWORD=mysecretpassword123
ENV API_KEY=sk_live_1234567890
COPY . .
RUN npm install
CMD ["node", "app.js"]
```

Why This is Dangerous:
- Secrets are visible in image history (`docker history <image>`), as the sketch after this list demonstrates
- Secrets remain in image layers even if they are "removed" in a later layer
- Anyone with pull access to the image can extract the secrets
- Secrets end up in version control if the Dockerfile is committed
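To see just how exposed an ENV-baked secret is, the sketch below (an illustrative script, assuming Docker is installed locally; the regex pattern list is only an example) dumps every layer-creating command from an image's history and flags anything that looks like a credential:

```python
#!/usr/bin/env python3
# extract_env_secrets.py - show how easily ENV secrets leak from image history (sketch)
import re
import subprocess
import sys

# Example patterns for credential-looking assignments
SUSPICIOUS = re.compile(r"(PASSWORD|SECRET|API_KEY|TOKEN)\s*=\s*\S+", re.IGNORECASE)

def layer_commands(image: str) -> list:
    """Return the full CreatedBy command for every layer in the image."""
    output = subprocess.check_output(
        ["docker", "history", "--no-trunc", "--format", "{{.CreatedBy}}", image],
        text=True,
    )
    return output.splitlines()

if __name__ == "__main__":
    image = sys.argv[1]
    leaks = [cmd for cmd in layer_commands(image) if SUSPICIOUS.search(cmd)]
    for cmd in leaks:
        print(f"⚠️  possible secret in layer: {cmd}")
    sys.exit(1 if leaks else 0)
```

Run against the anti-pattern image above, both ENV lines surface immediately, which is why secrets have to be injected at runtime instead of baked into layers.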
Solution: AWS Secrets Manager Integration
Secure Dockerfile:
```dockerfile
FROM node:18-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Copy application source into the build stage
COPY . .

FROM gcr.io/distroless/nodejs18-debian11:nonroot
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app .

# No secrets in image!
# Secrets will be injected at runtime via environment variables
# or mounted volumes from AWS Secrets Manager
EXPOSE 8080
CMD ["app.js"]
```

Application Code - Fetching Secrets:
```javascript
// app.js
const AWS = require('aws-sdk');
const express = require('express'); // an Express app is assumed here purely for illustration

const app = express();
const secretsManager = new AWS.SecretsManager({
  region: process.env.AWS_REGION
});

async function getSecret(secretName) {
  try {
    const data = await secretsManager.getSecretValue({ SecretId: secretName }).promise();
    return JSON.parse(data.SecretString);
  } catch (error) {
    console.error(`Error retrieving secret ${secretName}:`, error);
    throw error;
  }
}

// Fetch secrets at startup
let dbCredentials;

async function initializeSecrets() {
  dbCredentials = await getSecret('payment-app/database/credentials');
  process.env.DB_HOST = dbCredentials.host;
  process.env.DB_USER = dbCredentials.username;
  process.env.DB_PASSWORD = dbCredentials.password;
}

initializeSecrets().then(() => {
  // Start application
  app.listen(8080);
});
```

ECS Task Definition with Secrets:
{ "family": "payment-app", "containerDefinitions": [ { "name": "payment-app", "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/payment-app:latest", "secrets": [ { "name": "DB_PASSWORD", "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789:secret:payment-app/database/credentials:password::" }, { "name": "API_KEY", "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789:secret:payment-app/api-key::" } ], "environment": [ { "name": "AWS_REGION", "value": "us-east-1" } ] } ], "taskRoleArn": "arn:aws:iam::123456789:role/ecs-task-role", "executionRoleArn": "arn:aws:iam::123456789:role/ecs-execution-role" } IAM Role for Secrets Access:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": [ "arn:aws:secretsmanager:us-east-1:123456789:secret:payment-app/*" ] }, { "Effect": "Allow", "Action": [ "kms:Decrypt" ], "Resource": [ "arn:aws:kms:us-east-1:123456789:key/secrets-key-id" ], "Condition": { "StringEquals": { "kms:ViaService": "secretsmanager.us-east-1.amazonaws.com" } } } ] } Kubernetes Secrets (If Using EKS)
External Secrets Operator:
```yaml
# external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payment-app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: payment-app-secrets
    creationPolicy: Owner
  data:
    - secretKey: db-password
      remoteRef:
        key: payment-app/database/credentials
        property: password
    - secretKey: api-key
      remoteRef:
        key: payment-app/api-key
```

Pre-Commit Hook to Detect Secrets
```python
#!/usr/bin/env python3
# .git/hooks/pre-commit
import re
import sys

def detect_secrets_in_dockerfile():
    """Detect secrets in Dockerfile before commit"""
    # Quotes are optional so `ENV DB_PASSWORD=secret` is caught as well
    patterns = [
        (r'ENV\s+\w*PASSWORD\s*=\s*["\']?([^\s"\']+)["\']?', 'Password in ENV'),
        (r'ENV\s+\w*SECRET\s*=\s*["\']?([^\s"\']+)["\']?', 'Secret in ENV'),
        (r'ENV\s+\w*KEY\s*=\s*["\']?([^\s"\']+)["\']?', 'Key in ENV'),
        (r'--password\s+["\']?([^\s"\']+)["\']?', 'Password in command'),
        (r'apikey["\']?\s*[:=]\s*["\']?([^\s"\']+)["\']?', 'API key detected'),
    ]

    try:
        with open('Dockerfile', 'r') as f:
            content = f.read()

        violations = []
        for pattern, message in patterns:
            matches = re.findall(pattern, content, re.IGNORECASE)
            if matches:
                violations.append(f"{message}: {len(matches)} found")

        if violations:
            print("❌ SECURITY VIOLATION: Secrets detected in Dockerfile!")
            for violation in violations:
                print(f"  - {violation}")
            print("\nUse AWS Secrets Manager instead.")
            print("See: https://docs.aws.amazon.com/secretsmanager/")
            sys.exit(1)

        print("✅ No secrets detected in Dockerfile")
        return 0
    except FileNotFoundError:
        return 0

if __name__ == '__main__':
    sys.exit(detect_secrets_in_dockerfile())
```

Phase 3: Image Scanning and Signing
Amazon ECR Image Scanning
Enable Automatic Scanning:
```bash
# Enable automatic scanning on push
aws ecr put-image-scanning-configuration \
  --repository-name payment-app \
  --image-scanning-configuration scanOnPush=true

# Scan existing images
aws ecr start-image-scan \
  --repository-name payment-app \
  --image-id imageTag=latest
```

Get Scan Results:
```bash
# Get scan findings
aws ecr describe-image-scan-findings \
  --repository-name payment-app \
  --image-id imageTag=latest \
  --query 'imageScanFindings' \
  --output json
```

Fail Build on Critical Vulnerabilities:
```python
# check-ecr-scan-results.py
import sys

import boto3

ecr = boto3.client('ecr')

def check_scan_results(repo_name, image_tag):
    """Check ECR scan results and fail if critical issues found"""
    response = ecr.describe_image_scan_findings(
        repositoryName=repo_name,
        imageId={'imageTag': image_tag}
    )

    findings = response.get('imageScanFindings', {})
    # ECR reports per-severity totals under 'findingSeverityCounts'
    finding_counts = findings.get('findingSeverityCounts', {})

    critical_count = finding_counts.get('CRITICAL', 0)
    high_count = finding_counts.get('HIGH', 0)

    print("Scan Results:")
    print(f"  Critical: {critical_count}")
    print(f"  High: {high_count}")
    print(f"  Medium: {finding_counts.get('MEDIUM', 0)}")
    print(f"  Low: {finding_counts.get('LOW', 0)}")

    # Fail if critical or too many high severity
    if critical_count > 0:
        print(f"❌ Build failed: {critical_count} CRITICAL vulnerabilities found")
        sys.exit(1)

    if high_count > 5:
        print(f"❌ Build failed: {high_count} HIGH vulnerabilities found (max 5 allowed)")
        sys.exit(1)

    print("✅ Image scan passed")
    return 0

if __name__ == '__main__':
    if len(sys.argv) < 3:
        print("Usage: python check-ecr-scan-results.py <repo-name> <image-tag>")
        sys.exit(1)
    sys.exit(check_scan_results(sys.argv[1], sys.argv[2]))
```

Integrate into CI/CD:
```yaml
# buildspec.yml
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      - REPOSITORY_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Building Docker image...
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Pushing Docker image...
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Waiting for image scan to complete...
      - |
        aws ecr wait image-scan-complete \
          --repository-name $IMAGE_REPO_NAME \
          --image-id imageTag=$IMAGE_TAG \
          --max-attempts 30 \
          --delay 10
      - echo Checking scan results...
      - python check-ecr-scan-results.py $IMAGE_REPO_NAME $IMAGE_TAG
      - echo Image scan passed, proceeding with deployment
```

Image Signing with AWS Signer
Create Signing Profile:
```bash
# Create signing profile for container image (OCI) signing
aws signer put-signing-profile \
  --profile-name payment-app-signing-profile \
  --platform-id Notation-OCI-SHA384-ECDSA

# Get signing profile
aws signer get-signing-profile \
  --profile-name payment-app-signing-profile
```

Sign Image:
```bash
# Sign image artifact after push
# Note: start-signing-job signs artifacts stored in S3; for OCI images in ECR,
# signing is typically done with the Notation CLI plus the AWS Signer plugin.
aws signer start-signing-job \
  --source '{
    "s3": {
      "bucketName": "payment-app-artifacts",
      "key": "payment-app-latest.tar"
    }
  }' \
  --destination '{
    "s3": {
      "bucketName": "payment-app-artifacts",
      "prefix": "signed/"
    }
  }' \
  --profile-name payment-app-signing-profile
```

Verify Image Signature:
```bash
# Install notation CLI
# https://notaryproject.dev/docs/installation/

# Verify signature
notation verify \
  --certificate-file certificate.pem \
  123456789.dkr.ecr.us-east-1.amazonaws.com/payment-app:latest
```

Enforce Signature Verification in ECS:
```python
import boto3

ecs = boto3.client('ecs')

def verify_image_signature(image_uri):
    """Verify image signature before deployment.

    Placeholder: extract the image details and check its signature with the
    Notation CLI or AWS Signer APIs. Return True only if verification passes.
    """
    # TODO: implement actual verification; returning False keeps the check fail-closed
    return False

def create_service_with_signature_check(task_definition):
    """Create ECS service only if image is signed"""
    # Verify signature first
    image_uri = task_definition['containerDefinitions'][0]['image']

    if not verify_image_signature(image_uri):
        raise ValueError(f"Image {image_uri} is not signed or signature invalid")

    # Create service
    ecs.create_service(
        cluster='payment-cluster',
        serviceName='payment-service',
        taskDefinition=task_definition['family'],
        desiredCount=2
    )
```

Third-Party Scanning Tools
Trivy Integration:
```bash
# Install Trivy
wget https://github.com/aquasecurity/trivy/releases/download/v0.45.0/trivy_0.45.0_Linux-64bit.tar.gz
tar -xzf trivy_0.45.0_Linux-64bit.tar.gz

# Scan image
trivy image --severity HIGH,CRITICAL \
  --format json \
  --output trivy-results.json \
  123456789.dkr.ecr.us-east-1.amazonaws.com/payment-app:latest

# Fail build on critical findings
trivy image --exit-code 1 --severity CRITICAL \
  123456789.dkr.ecr.us-east-1.amazonaws.com/payment-app:latest
```

Snyk Integration:
```bash
# Install Snyk
npm install -g snyk

# Authenticate
snyk auth $SNYK_TOKEN

# Scan Docker image
snyk container test 123456789.dkr.ecr.us-east-1.amazonaws.com/payment-app:latest \
  --severity-threshold=high \
  --json > snyk-results.json
```

Phase 4: Runtime Security (Least Privilege)
ECS Task Security Configuration
Non-Root User:
{ "containerDefinitions": [ { "name": "payment-app", "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/payment-app:latest", "user": "1001:1001", "readonlyRootFilesystem": true, "privileged": false, "linuxParameters": { "capabilities": { "drop": ["ALL"], "add": ["NET_BIND_SERVICE"] } } } ] } Resource Limits:
{ "containerDefinitions": [ { "name": "payment-app", "memory": 512, "memoryReservation": 256, "cpu": 256, "ulimits": [ { "name": "nofile", "softLimit": 1024, "hardLimit": 2048 } ] } ] } EKS Pod Security Standards
Pod Security Policy:
(Note: PodSecurityPolicy was removed in Kubernetes 1.25. On newer clusters, enforce the equivalent constraints with Pod Security Admission namespace labels such as `pod-security.kubernetes.io/enforce: restricted`. The PSP below still illustrates the intended restrictions.)

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: payment-app-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'secret'
    - 'emptyDir'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: true
```

Security Context:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-app
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: payment-app
          image: 123456789.dkr.ecr.us-east-1.amazonaws.com/payment-app:latest
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          resources:
            limits:
              memory: "512Mi"
              cpu: "500m"
            requests:
              memory: "256Mi"
              cpu: "250m"
```

Network Policies (EKS)
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payment-app-netpol
spec:
  podSelector:
    matchLabels:
      app: payment-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: nginx-ingress
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
```

AWS App Mesh for Service Isolation
```yaml
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: payment-app
spec:
  podSelector:
    matchLabels:
      app: payment-app
  listeners:
    - portMapping:
        port: 8080
        protocol: http
  serviceDiscovery:
    dns:
      hostname: payment-app.payment.svc.cluster.local
  backends:
    - virtualService:
        virtualServiceName: database.payment.svc.cluster.local
```

Runtime Security Monitoring
Amazon Inspector for Runtime Scanning:
```bash
# Enable Amazon Inspector for the account (EC2 hosts and ECR images)
# (Assessment targets are an Inspector Classic concept; Inspector scans the
#  enabled resource types automatically.)
aws inspector2 enable \
  --resource-types EC2 ECR \
  --account-ids 123456789
```
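Once Inspector is enabled, findings can be pulled programmatically and fed into alerting or ticketing. A minimal sketch, assuming the boto3 `inspector2` client with default credentials; the severity and status filters are the only criteria used here:

```python
# list_critical_findings.py - pull active CRITICAL Inspector findings (illustrative sketch)
import boto3

inspector = boto3.client('inspector2')

def critical_findings():
    """Yield active CRITICAL findings across enabled resource types."""
    paginator = inspector.get_paginator('list_findings')
    pages = paginator.paginate(
        filterCriteria={
            'severity': [{'comparison': 'EQUALS', 'value': 'CRITICAL'}],
            'findingStatus': [{'comparison': 'EQUALS', 'value': 'ACTIVE'}],
        }
    )
    for page in pages:
        for finding in page.get('findings', []):
            yield finding

if __name__ == '__main__':
    for finding in critical_findings():
        print(f"{finding.get('title')} -> {finding.get('type')}")
```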
CloudWatch Container Insights:

```bash
# Enable Container Insights for ECS
aws ecs update-cluster \
  --cluster payment-cluster \
  --settings name=containerInsights,value=enabled
```

Phase 5: Compliance Automation
AWS Config Rules for Container Compliance
ECR Compliance Rule:
```python
import json

import boto3

def evaluate_ecr_compliance(configuration_item):
    """Evaluate ECR repository compliance"""
    compliance_status = 'COMPLIANT'
    annotation = ''

    # Check if image scanning is enabled
    if not configuration_item.get('configuration', {}).get('imageScanningConfiguration', {}).get('scanOnPush', False):
        compliance_status = 'NON_COMPLIANT'
        annotation = 'ECR image scanning must be enabled for PCI-DSS compliance'

    # Check if encryption is enabled
    if not configuration_item.get('configuration', {}).get('encryptionConfiguration', {}).get('encryptionType') == 'AES256':
        compliance_status = 'NON_COMPLIANT'
        annotation = 'ECR encryption must be enabled'

    return {
        'compliance_type': compliance_status,
        'annotation': annotation
    }

def lambda_handler(event, context):
    """Lambda handler for Config custom rule"""
    config = boto3.client('config')
    configuration_item = json.loads(event['invokingEvent'])['configurationItem']

    evaluation = evaluate_ecr_compliance(configuration_item)

    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': configuration_item['resourceType'],
            'ComplianceResourceId': configuration_item['resourceId'],
            'ComplianceType': evaluation['compliance_type'],
            'Annotation': evaluation['annotation'],
            'OrderingTimestamp': configuration_item['configurationItemCaptureTime']
        }],
        ResultToken=event['resultToken']  # required by put_evaluations
    )

    return evaluation
```

ECS Task Definition Compliance:
```python
def evaluate_ecs_task_compliance(configuration_item):
    """Evaluate ECS task definition compliance"""
    compliance_status = 'COMPLIANT'
    violations = []

    config = configuration_item.get('configuration', {})
    containers = config.get('containerDefinitions', [])

    for container in containers:
        # Check if running as root
        if container.get('user') is None or container.get('user') == 'root':
            violations.append('Container must not run as root user')

        # Check if readonly root filesystem
        if not container.get('readonlyRootFilesystem', False):
            violations.append('Container must have readonly root filesystem')

        # Check if privileged
        if container.get('privileged', False):
            violations.append('Container must not run in privileged mode')

        # Check capabilities
        linux_params = container.get('linuxParameters', {})
        capabilities = linux_params.get('capabilities', {})
        if 'ALL' not in capabilities.get('drop', []):
            violations.append('Container must drop ALL capabilities')

    if violations:
        compliance_status = 'NON_COMPLIANT'
        annotation = '; '.join(violations)
    else:
        annotation = 'Task definition meets security requirements'

    return {
        'compliance_type': compliance_status,
        'annotation': annotation
    }
```

Automated Compliance Reporting
```python
import json
from datetime import datetime

import boto3

config = boto3.client('config')
s3 = boto3.client('s3')

RULE_NAMES = [
    'ecr-image-scanning-enabled',
    'ecs-task-non-root-user',
    'ecs-task-readonly-filesystem'
]

def generate_compliance_report():
    """Generate container compliance report"""
    # Per-rule compliance status for the container security rules
    response = config.describe_compliance_by_config_rule(
        ConfigRuleNames=RULE_NAMES
    )

    rule_compliance = response.get('ComplianceByConfigRules', [])

    report = {
        'timestamp': datetime.utcnow().isoformat(),
        'compliance_by_rule': rule_compliance,
        'overall_compliance': calculate_compliance_percentage(rule_compliance)
    }

    # Save to S3 for audit trail
    s3.put_object(
        Bucket='compliance-reports',
        Key=f"container-compliance-{datetime.utcnow().strftime('%Y-%m-%d')}.json",
        Body=json.dumps(report, indent=2, default=str),
        ServerSideEncryption='AES256'
    )

    return report

def calculate_compliance_percentage(rule_compliance):
    """Percentage of evaluated rules that are COMPLIANT"""
    evaluated = [r for r in rule_compliance
                 if r.get('Compliance', {}).get('ComplianceType') in ('COMPLIANT', 'NON_COMPLIANT')]
    if not evaluated:
        return 100.0

    compliant = sum(1 for r in evaluated
                    if r['Compliance']['ComplianceType'] == 'COMPLIANT')
    return (compliant / len(evaluated)) * 100
```

Policy Enforcement with OPA (Open Policy Agent)
OPA Policy for Container Security:
```rego
# container-security.rego
package container.security

import future.keywords.in

# Deny if container runs as root
deny[msg] {
    input.container.user == "root"
    msg := "Container must not run as root user"
}

# Deny if privileged mode enabled
deny[msg] {
    input.container.privileged == true
    msg := "Container must not run in privileged mode"
}

# Deny if readonly root filesystem not enabled
deny[msg] {
    input.container.readonlyRootFilesystem != true
    msg := "Container must have readonly root filesystem"
}

# Deny if ALL capabilities not dropped
deny[msg] {
    not "ALL" in input.container.linuxParameters.capabilities.drop
    msg := "Container must drop ALL capabilities"
}
```

Integrate OPA with ECS:
```python
import requests

def validate_task_definition_with_opa(task_definition):
    """Validate task definition against OPA policies"""
    opa_url = "http://opa-service:8181/v1/data/container/security"

    # Prepare input for OPA
    input_data = {
        "container": task_definition['containerDefinitions'][0]
    }

    response = requests.post(
        f"{opa_url}/deny",
        json={"input": input_data}
    )

    if response.status_code == 200:
        result = response.json()
        if result.get('result'):
            # Policy violations found
            violations = result['result']
            raise ValueError(f"Policy violations: {violations}")

    return True
```

Complete CI/CD Pipeline with Security
Secure Build Pipeline
```yaml
# pipeline.yaml
version: 0.2

phases:
  pre_build:
    commands:
      - echo Checking base image...
      - BASE_IMAGE=$(grep '^FROM' Dockerfile | head -1 | awk '{print $2}')
      - trivy image --severity HIGH,CRITICAL --exit-code 1 "$BASE_IMAGE"
      - echo Checking for secrets in Dockerfile...
      - python check-dockerfile-secrets.py
  build:
    commands:
      - echo Building Docker image...
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Scanning image with Trivy...
      - trivy image --severity HIGH,CRITICAL --exit-code 1 $REPOSITORY_URI:$IMAGE_TAG
      - echo Pushing to ECR...
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Waiting for ECR scan...
      - |
        aws ecr wait image-scan-complete \
          --repository-name $IMAGE_REPO_NAME \
          --image-id imageTag=$IMAGE_TAG
      - echo Checking ECR scan results...
      - python check-ecr-scan-results.py $IMAGE_REPO_NAME $IMAGE_TAG
      - echo Signing image...
      - aws signer start-signing-job --profile-name payment-app-signing-profile --source ...
      - echo Validating task definition...
      - python validate-task-definition.py task-definition.json
      - echo All security checks passed!
```
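The pipeline's last check calls a validate-task-definition.py script that isn't shown elsewhere in this article. Here is a minimal sketch of what such a script might enforce, mirroring the Phase 4 hardening settings; the exact checks and the file path are assumptions, not a canonical implementation:

```python
#!/usr/bin/env python3
# validate-task-definition.py - illustrative sketch of a pre-deploy hardening check
import json
import sys

def validate_container(container):
    """Return a list of hardening violations for a single container definition."""
    violations = []
    if not container.get('user') or container['user'].startswith('root'):
        violations.append('must run as a non-root user')
    if not container.get('readonlyRootFilesystem', False):
        violations.append('must set readonlyRootFilesystem')
    if container.get('privileged', False):
        violations.append('must not be privileged')
    caps = container.get('linuxParameters', {}).get('capabilities', {})
    if 'ALL' not in caps.get('drop', []):
        violations.append('must drop ALL capabilities')
    if not container.get('memory'):
        violations.append('must set a memory limit')
    return violations

if __name__ == '__main__':
    with open(sys.argv[1]) as f:
        task_definition = json.load(f)

    failed = False
    for container in task_definition.get('containerDefinitions', []):
        for violation in validate_container(container):
            failed = True
            print(f"❌ {container.get('name', 'container')}: {violation}")

    if failed:
        sys.exit(1)
    print("✅ Task definition passes hardening checks")
```

Running it as the final buildspec step keeps non-compliant task definitions from ever reaching the deploy stage.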
Best Practices Summary
Do's ✅
- Use minimal base images (distroless, alpine)
- Multi-stage builds to reduce image size
- Never include secrets in images
- Scan images before deployment
- Sign images for integrity verification
- Run as non-root user
- Drop all capabilities by default
- Use readonly root filesystem
- Set resource limits
- Enable network policies
Don'ts ❌
- Don't use `latest` tags in production; pin a specific tag or digest (see the sketch after this list)
- Don't run as root user
- Don't include secrets in images or Dockerfiles
- Don't skip scanning before deployment
- Don't use privileged mode unless absolutely necessary
- Don't ignore security findings
- Don't disable read-only filesystem without justification
- Don't forget to sign images
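One practical way to avoid mutable tags is to resolve a tag to its immutable digest at deploy time and reference the image as `repo@sha256:...`. A minimal sketch using the ECR API; the registry, repository, and tag values are just examples:

```python
# pin_image_digest.py - resolve an ECR tag to its immutable digest (illustrative sketch)
import boto3

ecr = boto3.client('ecr')

def pinned_image_uri(registry: str, repository: str, tag: str) -> str:
    """Return a digest-pinned image URI for the given tag."""
    response = ecr.describe_images(
        repositoryName=repository,
        imageIds=[{'imageTag': tag}]
    )
    digest = response['imageDetails'][0]['imageDigest']
    return f"{registry}/{repository}@{digest}"

if __name__ == '__main__':
    uri = pinned_image_uri(
        '123456789.dkr.ecr.us-east-1.amazonaws.com',  # example registry
        'payment-app',
        'v1.4.2'  # example release tag
    )
    print(uri)  # e.g. ...payment-app@sha256:<digest>
```

The pinned URI can then be written into the task definition, so a re-pushed tag can never silently change what runs in production.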
Conclusion
Securing containers requires a comprehensive approach across the entire lifecycle. Key takeaways:
- Minimal base images reduce attack surface significantly
- AWS Secrets Manager eliminates secrets from images
- ECR scanning catches vulnerabilities before deployment
- Image signing ensures integrity and authenticity
- Runtime security (least privilege, isolation) protects running containers
- Compliance automation ensures continuous adherence to standards
The result? A secure, compliant containerized application that meets financial services requirements while maintaining operational efficiency.