Red Hat OpenShift
Configuration for ACK controllers in an OpenShift cluster.
Pre-installation instructions
Before an ACK service controller is installed via OperatorHub, a cluster administrator needs to perform the following pre-installation steps to provide the controller with the credentials and authentication context it needs to interact with the AWS API.
Configuring and authenticating ACK controllers in OpenShift requires IAM users and policies. Authentication credentials are stored in a Secret (optional if using IRSA) that must be created before the controller is installed.
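Before creating any IAM resources, it can help to confirm which AWS account and principal your local aws CLI is using. A quick, read-only check (assuming the aws CLI is installed and configured on your workstation):

```bash
# Prints the account ID, user ID, and ARN that the aws CLI is currently
# authenticating as; nothing is created or modified.
aws sts get-caller-identity
```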
Step 1: Create the installation namespace
If the default ack-system namespace does not exist already, create it:
```bash
oc new-project ack-system
```
Step 2: Bind an AWS IAM principal to a service user account
Create a user with the aws CLI (named ack-elasticache-service-controller in our example):
```bash
aws iam create-user --user-name ack-elasticache-service-controller
```
Enable programmatic access for the user you just created:
```bash
aws iam create-access-key --user-name ack-elasticache-service-controller
```
You should see output with important credentials:
{ "AccessKey": { "UserName": "ack-elasticache-service-controller", "AccessKeyId": "00000000000000000000", "Status": "Active", "SecretAccessKey": "abcdefghIJKLMNOPQRSTUVWXYZabcefghijklMNO", "CreateDate": "2021-09-30T19:54:38+00:00" } } Save or note AccessKeyId and SecretAccessKey for later use.
Each service controller repository provides a recommended policy ARN for use with the controller. For an example, see the recommended policy in the ElastiCache controller repository.
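Many ACK controller repositories also publish the recommended policy ARN as a plain-text file in the repository, so you can look it up from the command line. The path below follows a common ACK convention but is an assumption; verify it exists for the controller you are installing:

```bash
# Fetch the recommended managed policy ARN(s) for the ElastiCache controller.
# The config/iam/recommended-policy-arn path is assumed; check the repository
# if the file is not found.
curl -s https://raw.githubusercontent.com/aws-controllers-k8s/elasticache-controller/main/config/iam/recommended-policy-arn
```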
Attach the recommended policy to the user we created in the previous step:
```bash
aws iam attach-user-policy \
    --user-name ack-elasticache-service-controller \
    --policy-arn 'arn:aws:iam::aws:policy/AmazonElastiCacheFullAccess'
```
Step 3: Create ack-$SERVICE-user-config and ack-$SERVICE-user-secrets for authentication
Enter the ack-system namespace. Create a file, config.txt, with the following variables, leaving ACK_WATCH_NAMESPACE blank so the controller can properly watch all namespaces, and change any other values to suit your needs:
```
ACK_ENABLE_DEVELOPMENT_LOGGING=true
ACK_LOG_LEVEL=debug
ACK_WATCH_NAMESPACE=
AWS_REGION=us-west-2
AWS_ENDPOINT_URL=
ACK_RESOURCE_TAGS=hellofromocp
ENABLE_LEADER_ELECTION=true
LEADER_ELECTION_NAMESPACE=
RECONCILE_DEFAULT_MAX_CONCURRENT_SYNCS=1
FEATURE_FLAGS=
FEATURE_GATES=
```
Now use config.txt to create a ConfigMap in your OpenShift cluster:
```bash
export SERVICE=elasticache

oc create configmap \
    --namespace ack-system \
    --from-env-file=config.txt ack-$SERVICE-user-config
```
The Secret is optional if you intend to use IRSA. To use IRSA, STS must have been configured during cluster installation. There are two ways to provision an OpenShift cluster to utilize STS: Red Hat OpenShift Service on AWS (ROSA), which is provisioned with STS, or a self-managed OpenShift Container Platform cluster installed with the Cloud Credential Operator in manual mode with STS.
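To double-check the ConfigMap before continuing, you can print it back out; these values are not sensitive, so viewing them is safe:

```bash
# Shows the key/value pairs that were loaded from config.txt.
oc get configmap ack-$SERVICE-user-config --namespace ack-system -o yaml
```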
Create another file, secrets.txt, with the following authentication values, which you noted earlier when you created the user's access key:
```
AWS_ACCESS_KEY_ID=00000000000000000000
AWS_SECRET_ACCESS_KEY=abcdefghIJKLMNOPQRSTUVWXYZabcefghijklMNO
```
Use secrets.txt to create a Secret in your OpenShift cluster:
```bash
oc create secret generic \
    --namespace ack-system \
    --from-env-file=secrets.txt ack-$SERVICE-user-secrets
```
Delete config.txt and secrets.txt.
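If you want to confirm the Secret contains the two expected keys without printing their values, oc describe shows only the key names and sizes:

```bash
# Lists the data keys (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) and their
# byte counts; the decoded values are never displayed.
oc describe secret ack-$SERVICE-user-secrets --namespace ack-system
```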
If you change the name of either the ConfigMap or the Secret from the values given above, i.e. ack-$SERVICE-user-config and ack-$SERVICE-user-secrets, installations from OperatorHub will not function properly: the Deployment for the controller is preconfigured to use these names.
Step 4 (Optional): Apply additional Custom Resource Definitions (CRDs)
To prevent installation conflicts for CRDs that are shared across multiple AWS Controllers for Kubernetes, the AdoptedResource and FieldExport CRDs are not included in the OpenShift Embedded OperatorHub. A cluster administrator must install them manually before any controller is installed, by running the following commands:
Apply the AdoptedResource CRD
```bash
oc apply -f https://raw.githubusercontent.com/aws-controllers-k8s/runtime/main/config/crd/bases/services.k8s.aws_adoptedresources.yaml
```
Apply the FieldExport CRD
```bash
oc apply -f https://raw.githubusercontent.com/aws-controllers-k8s/runtime/main/config/crd/bases/services.k8s.aws_fieldexports.yaml
```
Step 5: Install the controller
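Before installing the controller, you can optionally confirm that both CRDs from the previous step are registered; the CRD names below follow the plural.group naming used in the manifests above:

```bash
# Each command should return one CustomResourceDefinition; an error here means
# the corresponding manifest from Step 4 was not applied.
oc get crd adoptedresources.services.k8s.aws
oc get crd fieldexports.services.k8s.aws
```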
Follow the instructions for installing the controller using OperatorHub.
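The linked instructions use the web console. If you prefer to drive the installation from the CLI, the Subscription below is a minimal sketch of the equivalent OLM object. The package name, channel, and catalog source shown are assumptions, so check the controller's OperatorHub entry for the actual values, and make sure a suitable OperatorGroup already exists in the ack-system namespace:

```bash
# Sketch only: confirm the package name, channel, and catalog source against
# the OperatorHub entry for your controller before applying.
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ack-$SERVICE-controller
  namespace: ack-system
spec:
  name: ack-$SERVICE-controller          # OperatorHub package name (assumed)
  channel: alpha                         # release channel (assumed)
  source: community-operators            # catalog source (assumed)
  sourceNamespace: openshift-marketplace
EOF
```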
Additional uninstallation steps
Perform the following cleanup steps in addition to the steps in Uninstall an ACK Controller.
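To see which of the pre-installation objects are still present before deleting them, take a quick look at the namespace (assuming the default names from this page were used):

```bash
# Lists the ConfigMaps and Secrets in ack-system; the two ACK-specific objects
# created earlier should appear here until they are deleted below.
oc get configmap,secret --namespace ack-system
```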
Uninstall the ACK Controller
In the OpenShift web console, navigate to the OperatorHub page and search for the controller name. Select Uninstall to remove the controller.
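If you prefer the CLI, removing an operator installed through OperatorHub generally means deleting its Subscription and ClusterServiceVersion. The names below are placeholders, so look them up first:

```bash
# Find the Subscription and CSV that were created for the controller.
oc get subscription,csv --namespace ack-system

# Delete them, replacing the names with the ones returned above.
oc delete subscription ack-elasticache-controller --namespace ack-system
oc delete csv <controller-csv-name> --namespace ack-system
```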
Delete ConfigMap
Delete the following ConfigMap you created in pre-installation:
```bash
oc delete configmap ack-$SERVICE-user-config --namespace ack-system
```
Delete user Secret
Delete the following Secret you created in pre-installation:
```bash
oc delete secret ack-$SERVICE-user-secrets --namespace ack-system
```
Next Steps
After you install the controller, you can follow the Cross Account Resource Management instructions to manage resources in multiple AWS accounts.
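As a preview of what that setup involves, the sketch below shows the general shape described in the ACK cross-account documentation: a ConfigMap that maps AWS account IDs to assumable role ARNs, plus a namespace annotation that selects the owning account. The account ID, role ARN, and namespace name are illustrative placeholders; follow the linked instructions for the authoritative steps.

```bash
# Illustrative placeholders only: 111122223333, the role ARN, and the
# "production" namespace are not real values.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ack-role-account-map
  namespace: ack-system
data:
  "111122223333": arn:aws:iam::111122223333:role/ack-cross-account-role
EOF

# Custom resources created in this namespace will be reconciled in the
# annotated AWS account.
oc annotate namespace production services.k8s.aws/owner-account-id=111122223333
```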