- Fork this project by pressing the fork button.
- Clone your forked version locally. You can now begin modifying it.
The supplied Flask app is a very simple API with three endpoints:
- GET '/': This is a simple health check, which returns the response 'Healthy'.
- POST '/auth': This takes an email and password as JSON arguments and returns a JWT token based on a custom secret.
- GET '/contents': This requires a valid JWT token, and returns the decrypted contents of that token.
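For orientation, here is a minimal sketch of a Flask app exposing the same three endpoints. This is not the repo's actual app/main.py, just an illustration of the behaviour described above; it assumes the PyJWT package and that the app object is named APP (which matches the gunicorn command used later in this guide).

```python
# Minimal sketch only -- the supplied app/main.py is the source of truth.
import datetime
import os

import jwt                      # PyJWT, assumed to be in requirements.txt
from flask import Flask, jsonify, request

JWT_SECRET = os.environ.get("JWT_SECRET")

APP = Flask(__name__)


@APP.route("/", methods=["GET"])
def health():
    # Simple health check.
    return jsonify("Healthy")


@APP.route("/auth", methods=["POST"])
def auth():
    # Take an email and password as JSON arguments and return a JWT
    # signed with the custom secret.
    body = request.get_json()
    payload = {
        "exp": datetime.datetime.utcnow() + datetime.timedelta(weeks=2),
        "nbf": datetime.datetime.utcnow(),
        "email": body["email"],
    }
    token = jwt.encode(payload, JWT_SECRET, algorithm="HS256")
    if isinstance(token, bytes):        # PyJWT < 2.0 returns bytes
        token = token.decode("utf-8")
    return jsonify(token=token)


@APP.route("/contents", methods=["GET"])
def contents():
    # Require a valid JWT in the Authorization header and return its contents.
    bearer_token = request.headers["Authorization"].split(" ")[1]
    data = jwt.decode(bearer_token, JWT_SECRET, algorithms=["HS256"])
    return jsonify(**data)


if __name__ == "__main__":
    APP.run(host="0.0.0.0", port=80)
```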
- Install Python dependencies. These dependencies are kept in a requirements.txt file. To install them, use pip:
pip install -r requirements.txt
- Set up the environment. The following environment variable is required:
JWT_SECRET - The secret used to make the JWT token; for the purpose of this course it can be any string.
The following environment variable is optional:
LOG_LEVEL - The level of logging. It will default to 'INFO', but when debugging an app locally you may want to set it to 'DEBUG'.
export JWT_SECRET=myjwtsecret
export LOG_LEVEL=DEBUG
- Run the app using the Flask server. From the flask-app directory, run:
python app/main.py
To try the API endpoints, open a new shell and run the following, replacing '<EMAIL>' and '<PASSWORD>' with any values you like:
export TOKEN=`curl -d '{"email":"<EMAIL>","password":"<PASSWORD>"}' -H "Content-Type: application/json" -X POST localhost:80/auth | jq -r '.token'`
This calls the endpoint 'localhost:80/auth' with '{"email":"<EMAIL>","password":"<PASSWORD>"}' as the message body. The return value is a JWT token based on the secret you supplied. We are assigning that token to the environment variable 'TOKEN'. To see the JWT token, run:
echo $TOKEN
To call the 'contents' endpoint, which decrypts the token and returns its contents, run:
curl --request GET 'http://127.0.0.1:80/contents' -H "Authorization: Bearer ${TOKEN}" | jq .
You should see the email that you passed in as one of the values.
- Install Docker. Follow the official Docker installation instructions (docs.docker.com) for your platform if you do not already have it installed.
- Create a Dockerfile. A Dockerfile describes how to build a Docker image. Create a file named 'Dockerfile' in the app repo; its contents describe the steps for creating the image. Your Dockerfile should (a hypothetical sketch follows these requirements):
- Use the 'python:stretch' image as a source image
- Set up an app directory for your code
- Install needed python requirements
- Define an entrypoint which will run the main app using the gunicorn WSGI server
gunicorn should be run with the arguments:
gunicorn -b :8080 main:APP
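Putting the requirements above together, a hypothetical Dockerfile might look like the sketch below. This is not the official solution; it assumes main.py and requirements.txt end up directly inside the image's working directory, so adjust the COPY paths to your repo layout.

```dockerfile
# Hypothetical sketch -- adjust paths to your repo layout.
FROM python:stretch

# Set up an app directory for your code.
COPY . /app
WORKDIR /app

# Install the needed Python requirements.
RUN pip install --upgrade pip && \
    pip install -r requirements.txt

# Define an entrypoint that runs the main app with the gunicorn WSGI server.
ENTRYPOINT ["gunicorn", "-b", ":8080", "main:APP"]
```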
- Create a file named 'env_file' and use it to set the environment variables which will be run locally in your container. Here we do not need the export command, just an equals sign:
\<VARIABLE-NAME\>=\<VARIABLE-VALUE\>
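For example, reusing the values from the local setup above (the secret shown is just a placeholder), env_file might contain:

```
JWT_SECRET=myjwtsecret
LOG_LEVEL=DEBUG
```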
- Build a local Docker image. To build a Docker image, run:
docker build -t jwt-api-test .
- Run the image locally, using the 'gunicorn' server:
docker run --env-file=env_file -p 80:8080 jwt-api-test
To use the endpoints, use the same curl commands as before:
export TOKEN=`curl -d '{"email":"<EMAIL>","password":"<PASSWORD>"}' -H "Content-Type: application/json" -X POST localhost:80/auth | jq -r '.token'`
curl --request GET 'http://127.0.0.1:80/contents' -H "Authorization: Bearer ${TOKEN}" | jq .
- Create an EKS cluster named 'simple-jwt-api'
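The guide does not prescribe a tool for creating the cluster. One hedged option, assuming you have eksctl installed and your AWS credentials configured, is:

```bash
# Creates an EKS cluster with eksctl's default node group and region settings.
eksctl create cluster --name simple-jwt-api
```

Cluster creation can take several minutes; once it finishes, `kubectl get nodes` should list the worker nodes.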
You will now create a pipeline that watches your GitHub repository. When changes are checked in, it will build a new image and deploy it to your cluster.
- Create an IAM role that CodeBuild can use to interact with EKS:
- Set an environment variable ACCOUNT_ID to the value of your AWS account id. You can do this with awscli:
```bash
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
```
- Create a trust relationship (assume-role policy document) that allows principals in your account to assume the role. You can do this by setting an environment variable with the trust policy:
```bash
TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"
```
- Create a role named 'UdacityFlaskDeployCBKubectlRole' using that trust document:
```bash
aws iam create-role --role-name UdacityFlaskDeployCBKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'
```
- Create a role policy document that allows the actions "eks:Describe*" and "ssm:GetParameters". You can create the document in your tmp directory:
```bash
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "eks:Describe*", "ssm:GetParameters" ], "Resource": "*" } ] }' > /tmp/iam-role-policy
```
- Attach the policy to 'UdacityFlaskDeployCBKubectlRole'. You can do this using awscli:
```bash
aws iam put-role-policy --role-name UdacityFlaskDeployCBKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy
```
You have now created a role named 'UdacityFlaskDeployCBKubectlRole'.
- Grant the role access to the cluster. The 'aws-auth' ConfigMap is used to grant role-based access control to your cluster. Set an environment variable ROLE containing the new role mapping, insert it into the ConfigMap after the 'mapRoles: |' line, and apply the patch:
```bash
ROLE="    - rolearn: arn:aws:iam::$ACCOUNT_ID:role/UdacityFlaskDeployCBKubectlRole\n      username: build\n      groups:\n        - system:masters"
kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
```
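To confirm the patch was applied, you can print the ConfigMap again and check that the new role appears under mapRoles:

```bash
kubectl get -n kube-system configmap/aws-auth -o yaml
```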
- Generate a GitHub access token. A GitHub access token will allow CodePipeline to monitor when a repo is changed. A token can be generated in your GitHub account settings (Settings > Developer settings > Personal access tokens). This token should be saved somewhere secure.
- The file buildspec.yml instructs CodeBuild. We need a way to pass your JWT secret to the app in Kubernetes securely. You will be using the AWS Parameter Store to do this. First, add the following to your buildspec.yml file:
```yaml
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
```
This lets CodeBuild know to set an environment variable based on a value in the Parameter Store.
- Put the secret into AWS Parameter Store:
aws ssm put-parameter --name JWT_SECRET --value "YourJWTSecret" --type SecureString
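Optionally, you can read the parameter back to confirm it was stored:

```bash
aws ssm get-parameter --name JWT_SECRET --with-decryption
```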
- Modify the CloudFormation template. There is a file named ci-cd-codepipeline.cfn.yml; this is the template file you will use to create your CodePipeline pipeline. Open this file and go to the 'Parameters' section. These are parameters that will accept values when you create a stack. Fill in the 'Default' value for the following (illustrated after the list):
- EksClusterName: use the name of the EKS cluster you created above
- GitSourceRepo: use the name of your project's GitHub repo
- GitHubUser: use your GitHub user name
- KubectlRoleName: use the name of the role you created for kubectl above
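For illustration only (the actual template defines more properties, and the parameter Type shown is an assumption; the placeholder values should be replaced with your own), the edited defaults might look something like:

```yaml
Parameters:
  EksClusterName:
    Type: String
    Default: simple-jwt-api
  GitSourceRepo:
    Type: String
    Default: <your-forked-repo-name>
  GitHubUser:
    Type: String
    Default: <your-github-user-name>
  KubectlRoleName:
    Type: String
    Default: UdacityFlaskDeployCBKubectlRole
```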
Save this file.
- Create a stack for CodePipeline:
- Go to the CloudFormation service in the AWS console.
- Press the 'Create Stack' button.
- Choose the 'Upload template to S3' option and upload the template file 'ci-cd-codepipeline.cfn.yml'
- Press 'Next'. Give the stack a name, and fill in your GitHub login and the GitHub access token generated earlier.
- Confirm the cluster name matches your cluster, the 'kubectl IAM role' matches the role you created above, and the repository matches the name of your forked repo.
- Create the stack.
You can check its status in the CloudFormation console.
- Check that the pipeline works. Once the stack is successfully created, commit a change to the master branch of your GitHub repo. Then, in the AWS console, go to the CodePipeline UI. You should see that the build is running.
- To test your API endpoints, get the external IP for your service:
kubectl get services simple-jwt-api -o wide
Now use the external IP to test the app:
export TOKEN=`curl -d '{"email":"<EMAIL>","password":"<PASSWORD>"}' -H "Content-Type: application/json" -X POST <EXTERNAL-IP URL>:80/auth | jq -r '.token'`
curl --request GET '<EXTERNAL-IP URL>:80/contents' -H "Authorization: Bearer ${TOKEN}" | jq
- Paste the external IP from above below this line for the reviewer to use:
EXTERNAL IP:
- Add running tests as part of the build. To require the unit tests to pass before the build will deploy new code to your cluster, you will add the tests to the build stage. Remember that you installed the requirements and ran the unit tests locally at the beginning of this project. You will add the same commands to buildspec.yml:
- Open buildspec.yml
- In the pre_build section, add a line to install the requirements and a line to run the tests (see the sketch after this list). You may need to refer to 'pip' as 'pip3' and 'python' as 'python3'.
- Save the file.
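A hedged sketch of what the added pre_build lines might look like (the exact test command depends on how the repo's unit tests are invoked; running pytest against test_main.py is assumed here):

```yaml
phases:
  pre_build:
    commands:
      # Install the app's Python requirements, then run the unit tests.
      - pip3 install -r requirements.txt
      - python3 -m pytest test_main.py
```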
- You can check that the tests prevent a bad deployment by breaking them on purpose:
- Open the test_main.py file
- Add 'assert False' to any of the tests
- Commit your code and push it to GitHub
- Check that the build fails in CodePipeline