AnkitDevCode

AWS CLI & Terraform for Beginners: Deploying Infrastructure as Code (Part-2)

In the previous article, we saw how to set up Terraform to build our infrastructure using code. Now, we’ll learn how to use GitLab’s automation to deploy our changes. That means whenever we change something in our code, GitLab will automatically apply those changes—so we don’t have to do it by hand every time.

Setup Overview

  • Version Control with GitLab: Store your Terraform configurations in a GitLab repository. This approach takes advantage of GitLab’s version control capabilities to track and review changes in infrastructure code.

  • Automated Pipelines: Use GitLab CI/CD pipelines to automate testing and deployment of the Terraform configurations. Pipelines can be configured to trigger on commits, merge requests, or manual runs.

  • AWS Integration: Configure Terraform to manage AWS resources by using the AWS provider. The AWS credentials and access controls should be securely managed, often using environment variables or encrypted secrets in GitLab.
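As a sketch of that last point, the AWS provider picks up `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` from the environment (for example, from GitLab CI/CD variables), so no secrets need to appear in the configuration itself:

```hcl
# Sketch: credentials come from environment variables set by the CI job,
# not from the code. Only the region is configured explicitly.
provider "aws" {
  region = var.region
}
```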

Typical Pipeline Steps

  • Init: Initialize the Terraform working directory in the pipeline. This step downloads provider plugins and configures the backend so Terraform can manage the specified resources.

  • Validate: Verify the correctness of the Terraform configuration files in a directory. terraform validate performs a static analysis to ensure the configuration is syntactically valid and internally consistent.

  • Plan: Execute terraform plan to preview changes without applying them. This helps in reviewing potential impacts before changes go live.

  • Apply: Run terraform apply to apply the planned changes to your AWS environment. This step can be set to manual to require approval, ensuring changes are reviewed.

  • Destroy: The terraform destroy command deletes all infrastructure that Terraform has created and currently manages. It removes every resource defined in your .tf files and tracked in the state file (EC2 instances, VPCs, S3 buckets, etc.).
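Run by hand, the same lifecycle looks roughly like the sketch below. The `-detailed-exitcode` flag is a common way to gate the apply step: terraform plan then exits with 0 (no changes), 1 (error), or 2 (changes pending).

```shell
#!/bin/sh
# Sketch of the pipeline lifecycle run locally (no `set -e`,
# because exit code 2 from plan is expected, not an error).
terraform init -input=false
terraform validate
terraform plan -detailed-exitcode -out=tfplan
rc=$?
if [ "$rc" -eq 2 ]; then
  terraform apply -input=false tfplan   # changes pending: apply them
elif [ "$rc" -eq 0 ]; then
  echo "No changes to apply."
else
  echo "Plan failed." >&2
  exit 1
fi
# terraform destroy -auto-approve      # tear-down, usually a manual step
```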

Use Protected Variables
Store sensitive values such as cloud provider credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) and GitLab credentials (GITLAB_USERNAME, GITLAB_TOKEN) as protected CI/CD variables.

See the guides below:

GitLab access token

Amazon access key

GitLab CI/CD variable
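Variables can also be created from the command line via the GitLab API. A sketch, where PROJECT_ID, the token, and the key value are placeholders you must supply:

```shell
# Sketch: create a protected, masked CI/CD variable via the GitLab API.
# PROJECT_ID and GITLAB_TOKEN are placeholders for your own values.
curl --request POST \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --form "key=AWS_ACCESS_KEY_ID" \
  --form "value=AKIA...example" \
  --form "protected=true" \
  --form "masked=true" \
  "https://gitlab.com/api/v4/projects/${PROJECT_ID}/variables"
```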

Create a .gitlab-ci.yml file:

Define the CI/CD pipeline stages and the Terraform image.

# Global variables
variables:
  TF_ROOT: "terraform"
  STATE_FILE_NAME: "terraform.tfstate"
  TF_ADDRESS: "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/terraform/state/${STATE_FILE_NAME}"
  TF_USERNAME: ${GITLAB_USERNAME}
  TF_PASSWORD: ${GITLAB_TOKEN}

default:
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]
  cache:
    - key: "${CI_COMMIT_REF_SLUG}-terraform"
      paths:
        - $TF_ROOT/.terraform/
        - $TF_ROOT/.terraform.lock.hcl

# Define stages
stages:
  - init
  - validate
  - plan
  - apply
  - destroy

before_script:
  - echo "Initializing Terraform in $TF_ROOT"
  - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
  - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  - cd $TF_ROOT

terraform_init:
  stage: init
  script:
    - terraform --version
    - terraform init
      -backend-config=address=${TF_ADDRESS}
      -backend-config=lock_address=${TF_ADDRESS}/lock
      -backend-config=unlock_address=${TF_ADDRESS}/lock
      -backend-config=username=${TF_USERNAME}
      -backend-config=password=${TF_PASSWORD}
      -backend-config=lock_method=POST
      -backend-config=unlock_method=DELETE
      -backend-config=retry_wait_min=5
  artifacts:
    paths:
      - $TF_ROOT/.terraform
      - $TF_ROOT/.terraform.lock.hcl
    expire_in: 1 hour

terraform_validate:
  stage: validate
  script:
    - terraform fmt -check -recursive
    - terraform validate
  dependencies:
    - terraform_init

terraform_plan:
  stage: plan
  script:
    - terraform plan -out=tfplan
  dependencies:
    - terraform_validate
  artifacts:
    paths:
      - $TF_ROOT/tfplan
    expire_in: 1 hour
  when: on_success
  only:
    - main
    - /^feature\/.*$/

# Stage for applying the Terraform plan
terraform_apply:
  stage: apply
  script:
    - terraform show tfplan
    - terraform apply -auto-approve tfplan
  artifacts:
    paths:
      - $TF_ROOT/terraform.tfstate
    expire_in: 1 hour
  when: manual
  only:
    - main
    - /^feature\/.*$/

terraform_destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual
  only:
    - main
    - /^feature\/.*$/

⚠️ Note: apply/destroy is not recommended on feature branches.

  • Use terraform plan only on feature branches
  • Gate apply/destroy behind merge to a protected branch
  • Require approvals or deploy tags to trigger apply
  • Use plan outputs to validate changes before applying
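One way to implement the gating above, assuming your default branch is protected, is to replace only: with rules: so the apply job never even appears in feature-branch pipelines. A sketch, not the full job:

```yaml
terraform_apply:
  stage: apply
  script:
    - terraform apply -auto-approve tfplan
  rules:
    # Manual, and only on the protected default branch
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual
```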

Amazon Web Console

Once you run terraform apply, Terraform provisions or updates your EC2 instance(s) based on the configuration in your .tf files. After a successful apply, the instance appears in the EC2 section of the AWS Management Console in the running state.

Terraform destroy

After a terraform destroy, your EC2 instance is gone — but if you want to run a post-destroy job, such as cleanup tasks, notifications, or archiving logs, you’ll need to orchestrate it outside Terraform since Terraform doesn’t natively support post-destroy hooks.

✅ Easy Post-Destroy Steps

  • Run a cleanup script after terraform destroy
  • Remove leftover resources (like EBS volumes or Elastic IPs)
  • Send a notification (Slack, email, etc.)
  • Archive logs or Terraform state
  • Delete secrets and SSH keys linked to the EC2 instance
  • Validate in AWS Console that everything’s gone
  • Optional: Use a wrapper script or CI/CD job to automate all this
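Since Terraform has no native post-destroy hooks, one simple option is an extra CI job in the same stage that runs after the destroy job. A sketch (the cleanup script path is hypothetical):

```yaml
post_destroy_cleanup:
  stage: destroy
  needs: [terraform_destroy]
  script:
    - ./scripts/cleanup.sh    # hypothetical: remove leftovers, archive logs
    - echo "Destroy finished, send a notification here (Slack, email, ...)"
  when: manual
```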

Terraform Project Structure

repo-root/
├── terraform/
│ ├── main.tf
│ ├── variables.tf
│ ├── outputs.tf
│ └── ...
└── .gitlab-ci.yml

variables.tf

variable "region" {
  description = "AWS region"
  type        = string
  default     = "ap-southeast-2"
}

variable "ami_id" {
  description = "AMI ID for the EC2 instance"
  type        = string
  default     = "ami-0310483fb2b488153"
}

variable "instance_type" {
  description = "Instance type for the EC2 instance"
  type        = string
  default     = "t2.micro"
}
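The defaults above can be overridden per environment without touching variables.tf, for example with a terraform.tfvars file (the values below are illustrative):

```hcl
# terraform.tfvars -- example overrides; adjust to your account and region
region        = "us-east-1"
instance_type = "t3.micro"
```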

main.tf

provider "aws" {
  region = var.region
}

resource "aws_instance" "my_instance" {
  ami           = var.ami_id
  instance_type = var.instance_type
}

outputs.tf

output "web_server_public_ip" {
  value       = aws_instance.my_instance.public_ip
  description = "The public IP address of the web server."
}

🧩 Workflow Overview

1. terraform init

  • Initializes working directory.
  • Downloads required provider plugins (e.g., AWS).
  • Sets up backend if remote state is configured.
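For the -backend-config flags in the pipeline above to take effect, the configuration needs an (empty) http backend block; the connection details themselves are injected at init time:

```hcl
terraform {
  # GitLab-managed state: address, credentials, and lock settings are
  # supplied via -backend-config flags during `terraform init`.
  backend "http" {}
}
```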

2. terraform validate

  • Validates .tf syntax and structure.
  • Ensures configuration is logically sound.

3. terraform plan

  • Generates an execution plan.
  • Displays actions Terraform will take:
    • Create EC2 instance
    • Attach security group
    • Allocate volumes
  • Allows review before making any changes.

4. terraform apply

  • Executes the plan by interacting with AWS APIs.
  • Creates the EC2 instance with specified parameters:
    • AMI ID
    • Instance type
    • Key pair
    • Subnet
  • Applies tags and IAM roles (if defined).
  • Prompts for approval unless -auto-approve is used.

5. 📡 Provisioning via AWS

  • AWS provisions the instance in your chosen region and AZ.
  • Network interfaces, volumes, and security configurations are applied.
  • Terraform waits until the EC2 instance reaches the running state.

6. terraform state update

  • .tfstate file is modified to reflect actual infrastructure.
  • Stores metadata including EC2 instance ID, IP, etc.
  • Essential for managing future changes and avoiding drift.
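The state can also be inspected directly from the CLI, for example:

```shell
terraform state list                            # resources tracked in the state
terraform state show aws_instance.my_instance   # recorded details for one resource
```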

7. Output Variables (optional)

  • Displays configured outputs:
    • Public IP
    • Instance hostname
    • EC2 tags, etc.


🧠 AI Assistance — Content and explanations are partially supported by ChatGPT, Microsoft Copilot, and GitLab Duo, following AWS documentation.
