The Cornerstone Collection is a lab provisioning tool built on Red Hat Ansible Automation Platform. It streamlines the creation of virtual machines (VMs) across on-premises environments (libvirt) and cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), as shown in Figure 1. In this article, we’ll walk through a simple demonstration of deploying a test instance on AWS and GCP using the collection.

If you’re more of a text-based learner, you can follow this tutorial to understand how the collection works and how to raise a VM. If you favor a more visual learning style, you can follow the same tutorial in the following videos.
How does it all work?
The collection comprises two roles: `pre_req_install` and `cornerstone`. Both leverage a global vars file called `main.yml`, which lives in a `vars` directory alongside the playbooks that call these roles during provisioning. The reason for this layout is that `main.yml` contains all the variables integral to building the VMs, so it needs to be easy to find and edit.
Playbook directory layout:
```
[oezeakac@oezeakac-thinkpadt14gen3 cornerstone-playbooks]$ tree
.
├── pre-req-run.yml
├── run.yml
└── vars
    └── main.yml
```
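Before running either playbook, the collection itself must be installed where Ansible can find it. Assuming it is available under the `cornerstone.cornerstone` namespace used in the playbooks below (if you are working from a source checkout, point `ansible-galaxy` at that path instead):

```
# Assumes the collection is published under this name; install from a
# local checkout or Git URL if your environment differs.
ansible-galaxy collection install cornerstone.cornerstone
```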
pre_req_install role
This role automates the installation and configuration of the packages and tools needed to create the virtual machines. The role is used via a playbook (`pre-req-run.yml`), as shown below, and relies on the `main.yml` vars file to complete a successful install. The `main.yml` excerpts below show examples of the variables to provide to raise the VM in AWS or GCP.
`pre-req-run.yml`:
```yaml
---
- name: Build Instance
  hosts: localhost
  vars_files:
    - vars/main.yml
  tasks:
    - include_role:
        name: cornerstone.cornerstone.pre_req_install
```
Full examples can be found below:
`/vars/main.yml` (AWS):
```yaml
aws_system_user:
aws_profile: #Ex: profile1
aws_access_key: #access-key
aws_secret_key: #secret-key
aws_region: #Ex: "eu-west-1"
aws_format: #Ex: table
foundation: #Ex: "aws"
key_name: #Ex: aws_keypair
key_file: #Ex: /home/oezeakac/labs/Ansible/cornerstone-playbooks/aws_keypair #non-root dir
ansible_python_interpreter: #Ex: /usr/bin/python3.11
cornerstone_prefix: #Ex: cs
cornerstone_ssh_admin_username: #Ex: rhadmin
cornerstone_ssh_admin_pubkey: #Ex: rsa.pub
cornerstone_aws_ssh_key_name: #Ex: "{{ key_name }}"
cornerstone_aws_profile: #Ex: default
cornerstone_ssh_user: #Ex: ec2-user
cornerstone_ssh_key_path: #Ex: "ssh_key.pem"
cornerstone_platform: #Ex: aws
cornerstone_location: #Ex: eu-west-1
```
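If you don’t already have the EC2 key pair referenced by `key_name` and `key_file`, one way to create it is with the AWS CLI. The key pair and file names below are illustrative, so substitute your own:

```
# Illustrative names; any existing EC2 key pair works just as well.
aws ec2 create-key-pair --key-name aws_keypair \
  --query 'KeyMaterial' --output text > aws_keypair.pem
chmod 600 aws_keypair.pem
```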
`/vars/main.yml` (GCP):
```yaml
foundation: #Ex: "gcp"
```
The `foundation` variable determines which set-up tasks run from the tasks folder of the role. Take a look at the `main.yml` file under the tasks directory: if `foundation` is set to `aws`, the `aws.yml` task file runs and executes the AWS-related installation tasks; if it is set to `gcp`, the `gcp.yml` task file runs and executes the GCP-related tasks. The same logic applies to the `libvrt` and `azure` values.
`/tasks/main.yml`:
```yaml
- name: AWS Pre-Req Install
  ansible.builtin.include_tasks: "aws.yml"
  when: foundation == 'aws'

- name: Azure Pre-Req Install
  ansible.builtin.include_tasks: "azure.yml"
  when: foundation == 'azure'

- name: Libvrt Pre-Req Install
  ansible.builtin.include_tasks: "libvrt.yml"
  when: foundation == 'libvrt'

- name: GCP Pre-Req Install
  ansible.builtin.include_tasks: "gcp.yml"
  when: foundation == 'gcp'
```
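This article doesn’t show the per-platform task files, but as a rough idea of what one contains, an AWS pre-req file typically installs the Python SDK that Ansible’s AWS modules depend on. This is a hypothetical sketch, not the collection’s actual `aws.yml`:

```yaml
# Hypothetical sketch only; the collection's real aws.yml may install
# different packages or use a different mechanism.
- name: Install the Python libraries required by the amazon.aws modules
  ansible.builtin.pip:
    name:
      - boto3
      - botocore
    state: present
```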
When ready, populate the vars file with the relevant information and run the playbook with the following command:
```
ansible-navigator run pre-req-run.yml
```
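By default, `ansible-navigator` opens its interactive text interface. If you prefer plain playbook output (when capturing logs, for instance), run it in stdout mode:

```
ansible-navigator run pre-req-run.yml --mode stdout
```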
Cornerstone role
This role creates the virtual machines in the chosen cloud environment.
`run.yml`:
```yaml
---
- name: Build Instance
  hosts: localhost
  vars_files:
    - vars/main.yml
  tasks:
    - include_role:
        name: cornerstone.cornerstone.cornerstone
```
`/vars/main.yml` (AWS):
```yaml
cornerstone_sg:
  - name: "testworkshop-sg"
    description: Security group for aws
    region: "{{ cornerstone_location }}"
    rules:
      - proto: tcp
        from_port: 22
        to_port: 22
        group_name: ""
        cidr_ip: 0.0.0.0/0
        rule_desc: "allowSSHin_all"
      - proto: tcp
        from_port: 443
        to_port: 443
        group_name: ""
        cidr_ip: 0.0.0.0/0
        rule_desc: "allowHttpsin_all"
      - proto: all
        from_port: ""
        to_port: ""
        group_name: "testworkshop-sg"
        cidr_ip: 0.0.0.0/0
        rule_desc: "allowAllfromSelf"

vm_state: present

guests:
  testsystem1:
    cornerstone_vm_state: #Ex: "{{ vm_state }}"
    cornerstone_platform: #Ex: aws
    cornerstone_tag_purpose: #Ex: "Testing"
    cornerstone_tag_role: #Ex: "testsystem"
    cornerstone_vm_name: #Ex: testsystem
    cornerstone_location: #Ex: eu-west-1
    cornerstone_vm_aws_az: #Ex: eu-west-1a
    cornerstone_vm_flavour: #Ex: t3.2xlarge
    cornerstone_vm_aws_ami: #Ex: ami-0b04ce5d876a9ba29
    cornerstone_vm_aws_sg: #Ex: obitestworkshop-sg
    cornerstone_virtual_network_name: #Ex: "{{ cornerstone_prefix }}vnet"
    cornerstone_virtual_network_cidr: #Ex: "10.1.0.0/16"
    cornerstone_subnet_name: #Ex: "{{ cornerstone_prefix }}subnet"
    cornerstone_public_private_ip: #Ex: public
    cornerstone_vm_private_ip:
    cornerstone_vm_assign_public_ip: #Ex: yes
    cornerstone_vm_public_ip: #Ex: 63.34.15.95
    cornerstone_publicip_allocation_method: #Ex: Dynamic
    cornerstone_publicip_domain_name: #Ex: null
    cornerstone_vm_os_disk_size: #Ex: 10
    cornerstone_vm_data_disk: #Ex: false
    cornerstone_vm_data_disk_device_name: #Ex: "/dev/xvdb"
    cornerstone_aws_vm_data_disk_managed: #Ex: "gp2"
    cornerstone_vm_data_disk_size: #Ex: "50"
```
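Because `guests` is a dictionary keyed by VM name, provisioning more machines in the same run appears to be a matter of adding further entries. A hypothetical illustration (the second VM’s name and flavour are invented for the example):

```yaml
guests:
  testsystem1:
    # ...settings as shown above...
  testsystem2:  # hypothetical second VM
    cornerstone_vm_name: testsystem2
    cornerstone_vm_flavour: t3.micro
    # ...remaining keys mirror testsystem1...
```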
`/vars/main.yml` (GCP):
```yaml
cornerstone_prefix: #Ex: cs
cornerstone_platform: #Ex: gcp
ansible_python_interpreter: #Ex: /usr/bin/python3.11
cornerstone_gcp_project: #Ex: openenv-d2p2t
cornerstone_gcp_auth_kind: #Ex: "serviceaccount"
cornerstone_service_account_file: #Ex: /home/oezeakac/labs/Ansible/cornerstone-playbooks/openenv-prbtd-6d274028b755.json (this is the private key)
cornerstone_virtual_network_name: #Ex: "{{ cornerstone_prefix }}-vnet"
cornerstone_location: #Ex: europe-west2
cornerstone_gcp_zone: #Ex: europe-west2-a
cornerstone_virtual_network_cidr: #Ex: 172.16.0.0/28
cornerstone_gcp_use_serviceaccount: true
cornerstone_ssh_admin_username: #Ex: root
cornerstone_ssh_admin_pubkey: #<generate a public SSH key and substitute it here when running run.yml>
ocp4_platform: gcp

cornerstone_sg:
  - name: "cs-sg"
    description: "firewall rules"
    rules:
      - proto: tcp
        from_port: 22

vm_state: present

guests:
  bastion:
    cornerstone_platform: #Ex: gcp
    cornerstone_gcp_project: #Ex: openenv-d2p2t
    cornerstone_gcp_auth_kind: #Ex: "serviceaccount"
    cornerstone_service_account_file: "" #Ex: /home/oezeakac/labs/Ansible/cornerstone-playbooks/openenv-prbtd-6d274028b755.json
    cornerstone_working_dir: #Ex: '/tmp/'
    cornerstone_vm_state: #Ex: "{{ vm_state }}"
    cornerstone_vm_name: #Ex: "bastion"
    cornerstone_location: #Ex: europe-west2
    cornerstone_gcp_zone: #Ex: europe-west2-a
    cornerstone_virtual_network_name: #Ex: "{{ cornerstone_prefix }}-vnet"
    cornerstone_virtual_network_cidr: #Ex: 172.16.0.0/28
    cornerstone_subnet_name: #Ex: "{{ cornerstone_prefix }}subnet"
    cornerstone_vm_flavour: #Ex: e2-medium
    cornerstone_vm_gcp_source_image: #Ex: "projects/rhel-cloud/global/images/rhel-8-v20230411"
    cornerstone_vm_os_disk_size: #Ex: 30
    cornerstone_tag_purpose: #Ex: "bastion"
    cornerstone_tag_role: #Ex: "testing"
```
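If you still need the service account JSON key that `cornerstone_service_account_file` points at, one way to generate it is with the gcloud CLI. The service account and file names here are placeholders:

```
# Placeholder names; substitute your own service account, project, and path.
gcloud iam service-accounts keys create sa-key.json \
  --iam-account=my-sa@my-project.iam.gserviceaccount.com
```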
Like the previous role, cornerstone is used via a playbook that leverages the role and the `main.yml` in the vars directory. If you look back at the example vars file above, you can see it contains the configuration information to create the virtual machine in your chosen cloud platform. So if you’ve chosen AWS, details such as the VPC, security groups, and EC2 flavor are all defined here. The `cornerstone_platform` variable determines which tasks need to run to complete provisioning: under the tasks folder, `main.yml` checks its value, and if it is set to, for example, `aws`, all the AWS-specific task files run. The same goes for GCP if the value is `gcp`.
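This dispatch mirrors the `pre_req_install` pattern shown earlier. As an illustrative sketch of the idea (not the collection’s actual file), the role’s `tasks/main.yml` would branch on `cornerstone_platform` along these lines:

```yaml
# Illustrative sketch; the real tasks/main.yml in the cornerstone role
# may differ in file names and detail.
- name: Run AWS provisioning tasks
  ansible.builtin.include_tasks: "aws.yml"
  when: cornerstone_platform == 'aws'

- name: Run GCP provisioning tasks
  ansible.builtin.include_tasks: "gcp.yml"
  when: cornerstone_platform == 'gcp'
```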
Run the playbook using the command below:
```
ansible-navigator run run.yml
```
Once that’s complete, you can head to the AWS console and check the EC2 Dashboard to confirm that a test instance has been created. See Figure 2.
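You can also verify from the terminal with the AWS CLI, assuming the instance carries a Name tag matching `cornerstone_vm_name` from the vars file:

```
# Assumes the VM was tagged Name=testsystem, per the example vars above.
aws ec2 describe-instances --region eu-west-1 \
  --filters "Name=tag:Name,Values=testsystem" \
  --query "Reservations[].Instances[].[InstanceId,State.Name]" --output table
```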

For the bastion server in GCP, if you log in to the console, you can see that it has been created (Figure 3).
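Likewise, the gcloud CLI can confirm the GCP instance; the project and zone values below come from the example vars file:

```
gcloud compute instances list --project openenv-d2p2t --zones europe-west2-a
```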
