chore: changes to jetstack-agent chart to use new Agent image #677
Signed-off-by: Ashley Davis <ashley.davis@cyberark.com>
```diff
 {{- include "jetstack-agent.labels" . | nindent 4 }}
 rules:
-- apiGroups: ["*.openshift.io"]
+- apiGroups: ["route.openshift.io"]
```
This fixes the permissions error mentioned in the PR description.
It is a bug that was already fixed in the venafi-kubernetes-agent chart, here:
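For context, the corrected rule grants read access to OpenShift routes. The following is only a sketch of what such a rule looks like; the exact resource and verb lists in the chart template are assumptions here, not copied from it:

```yaml
# Sketch of a ClusterRole rule for OpenShift routes.
# The resources and verbs shown are assumptions; check the chart template
# for the exact values it grants.
- apiGroups: ["route.openshift.io"]
  resources: ["routes"]
  verbs: ["get", "list", "watch"]
```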
After running the test script, the logs now look like this:
```console
$ TLSPK_ORG=staff-busy-sanderson ./hack/install_local_jetstack_secure_chart.sh
...
$ kubectl logs -n jetstack-secure $(kubectl get pod -n jetstack-secure -l app.kubernetes.io/instance=jetstack-agent -o jsonpath='{.items[0].metadata.name}')
I0722 14:51:16.358377 1 run.go:58] "Starting" logger="Run" version="v1.6.0" commit="32d8a81e90a0811e45ebfe5283004b1ce5ddb7c8"
I0722 14:51:16.359438 1 run.go:116] "Healthz endpoints enabled" logger="Run.APIServer" addr=":8081" path="/healthz"
I0722 14:51:16.359472 1 run.go:120] "Readyz endpoints enabled" logger="Run.APIServer" addr=":8081" path="/readyz"
I0722 14:51:21.524524 1 run.go:233] "Skipping datagatherers for CRDs that can't be found in Kubernetes" logger="Run" datagatherers=["k8s/googlecasissuers","k8s/googlecasclusterissuers","k8s/awspcaissuer","k8s/awspcaclusterissuers","k8s/gateways","k8s/virtualservices","k8s/routes","k8s/venaficlusterissuers","k8s/venafiissuers"]
I0722 14:51:23.131141 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
```
I pushed a commit to fix the logged error about OpenShift routes permissions.
I then tested the chart using the supplied script, applied some test resources, and observed data being sent to the jetstack-secure backend:
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test
spec:
  acme:
    profile: unknown
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: test-issuer-key
    solvers:
    - dns01:
        rfc2136:
          nameserver: 10.0.0.16
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: www
spec:
  dnsNames:
  - www-1.jetstack-richard.jetstacker.net
  secretName: www
  issuerRef:
    kind: Issuer
    name: test
```
```console
$ kubectl get cert-manager -A
NAMESPACE   NAME                                          STATE     AGE
default     order.acme.cert-manager.io/www-1-1312294548   errored   4m13s

NAMESPACE   NAME                                        APPROVED   DENIED   READY   ISSUER   REQUESTER                                          AGE
default     certificaterequest.cert-manager.io/www-1   True                False   test     system:serviceaccount:cert-manager:cert-manager   4m14s

NAMESPACE   NAME                              READY   SECRET   AGE
default     certificate.cert-manager.io/www   False   www      4m14s

NAMESPACE   NAME                          READY   AGE
default     issuer.cert-manager.io/test   True    4m14s
```
```console
$ kubectl logs -n jetstack-secure $(kubectl get pod -n jetstack-secure -l app.kubernetes.io/instance=jetstack-agent -o jsonpath='{.items[0].metadata.name}')
I0722 14:55:33.499726 1 run.go:58] "Starting" logger="Run" version="v1.6.0" commit="32d8a81e90a0811e45ebfe5283004b1ce5ddb7c8"
I0722 14:55:33.500821 1 run.go:116] "Healthz endpoints enabled" logger="Run.APIServer" addr=":8081" path="/healthz"
I0722 14:55:33.500859 1 run.go:120] "Readyz endpoints enabled" logger="Run.APIServer" addr=":8081" path="/readyz"
I0722 14:55:38.758814 1 run.go:233] "Skipping datagatherers for CRDs that can't be found in Kubernetes" logger="Run" datagatherers=["k8s/googlecasissuers","k8s/googlecasclusterissuers","k8s/awspcaissuer","k8s/awspcaclusterissuers","k8s/gateways","k8s/virtualservices","k8s/venaficlusterissuers","k8s/venafiissuers"]
I0722 14:55:40.272679 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
I0722 14:56:46.253445 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
I0722 14:57:52.368513 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
I0722 14:58:58.395934 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
I0722 15:00:04.439407 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
I0722 15:01:10.812473 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
I0722 15:02:16.839412 1 run.go:449] "Data sent successfully" logger="Run.gatherAndOutputData.postData"
```

/approve
/lgtm
Ah, I notice that there are a couple of manual steps to perform before we can release this updated chart.
Lines 198 to 216 in bd8ce9e
The process is as follows:

1. Create a branch.
2. Increment version numbers.
   1. Increment the `version` value in [Chart.yaml](deploy/charts/jetstack-agent/Chart.yaml).
      DO NOT use a `v` prefix.
      The `v` prefix [breaks Helm OCI operations](https://github.com/helm/helm/issues/11107).
   2. Increment the `appVersion` value in [Chart.yaml](deploy/charts/jetstack-agent/Chart.yaml).
      Use a `v` prefix, to match the Docker image tag.
   3. Increment the `image.tag` value in [values.yaml](deploy/charts/jetstack-agent/values.yaml).
      Use a `v` prefix, to match the Docker image tag.
   4. Update the Helm unit test snapshots:
      ```sh
      helm unittest ./deploy/charts/jetstack-agent --update-snapshot
      ```
3. Create a pull request and wait for it to be approved.
4. Merge the branch.
5. Manually trigger the Helm Chart workflow:
   [release_js-agent_chart.yaml](https://github.com/jetstack/enterprise-builds/actions/workflows/release_js-agent_chart.yaml).
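To illustrate steps 2.1 to 2.3 above, the fields being bumped look roughly like this. The version numbers below are placeholders for illustration only, not the values for any particular release:

```yaml
# deploy/charts/jetstack-agent/Chart.yaml (placeholder values)
version: 0.5.0        # chart version: no `v` prefix
appVersion: v1.6.0    # `v` prefix, matching the Docker image tag
---
# deploy/charts/jetstack-agent/values.yaml (placeholder values)
image:
  tag: v1.6.0         # `v` prefix, matching the Docker image tag
```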
Signed-off-by: Richard Wall <richard.wall@cyberark.com>
…cumented in README.md
Signed-off-by: Richard Wall <richard.wall@cyberark.com>
Branch updated from 8b7948a to 6478d30.

Tested the release process by pushing a new tag and triggering the release workflow in the enterprise-builds repo:
```console
$ helm get metadata -n jetstack-secure jetstack-agent
NAME: jetstack-agent
CHART: jetstack-agent
VERSION: 0.5.0-alpha.0
APP_VERSION: v1.6.0
ANNOTATIONS:
DEPENDENCIES:
NAMESPACE: jetstack-secure
REVISION: 2
STATUS: deployed
DEPLOYED_AT: 2025-07-22T16:42:27+01:00
```
After verifying the release process with an alpha.0 pre-release
Signed-off-by: Richard Wall <richard.wall@cyberark.com>
The old preflight image is not maintained or built any more. This PR changes the old chart on the legacy-jetstack-secure branch to use the new Agent image by default.
The env var changes for POD_NAMESPACE and POD_NAME are required to prevent errors due to changes in the Agent image.
POD_UID and POD_NODE are included too, because including them seems harmless.
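Env vars like these are typically injected with the Kubernetes downward API. The following is only a sketch of what the container `env` section might look like; the exact layout in the chart's deployment template is an assumption here:

```yaml
# Sketch of downward API env vars for the agent container (assumed layout;
# the chart's deployment template may differ).
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_UID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid
  - name: POD_NODE
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```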
Still required: the agent will still print errors, because the new Agent image looks for OpenShift routes:
We might need to expand the permissions in the legacy chart, or else prevent the agent from making this check.