I'm not sure what I'm doing wrong, but after following the documentation for setting up Kubernetes auth with Vault, it doesn't seem to work.
My steps for setting up Vault are as follows:
```shell
# install vault
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault -n vault --create-namespace \
  --set "injector.enabled=false" \
  --set "server.dev.enabled=true"

# enable vault kubernetes authentication
kubectl -n vault exec vault-0 -- vault auth enable kubernetes

# configure vault to use kubernetes auth
kubectl -n vault exec vault-0 -- vault write auth/kubernetes/config \
  kubernetes_host=https://kubernetes.default.svc.cluster.local \
  disable_local_ca_jwt=true

# create role to authenticate with (use root policy just for testing purposes)
kubectl -n vault exec vault-0 -- vault write auth/kubernetes/role/my-app \
  bound_service_account_names=my-app \
  bound_service_account_namespaces=vault \
  alias_name_source=serviceaccount_name \
  token_policies=root \
  ttl=1h
```

My app is deployed via a Helm chart as well. I won't post all of its code here, but here are the important bits. The output of `helm template` for my app includes the following:
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
automountServiceAccountToken: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      volumes:
        - name: vault-token
          projected:
            sources:
              - serviceAccountToken:
                  path: vault-token
                  expirationSeconds: 3600
                  audience: vault
      serviceAccountName: my-app
      containers:
        - name: my-app
          image: "repo/my-app:latest"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /var/run/secrets/tokens
              name: vault-token
```

The Helm installation works, and I log the JWT from the projected service account token inside the container (for debugging). When I decode the JWT, it looks like this:
```json
{
  "aud": ["vault"],
  "exp": 1724541540,
  "iat": 1724537940,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "jti": "497a8b8e-6cb0-4eb0-966b-3d659ecc4c60",
  "kubernetes.io": {
    "namespace": "vault",
    "node": {
      "name": "ip-10-0-21-203",
      "uid": "8383ada6-253e-4d1a-a2a1-8ab7fd510461"
    },
    "pod": {
      "name": "my-app-55bd645df8-rt6qv",
      "uid": "cf15e767-79de-40ca-ab79-9f64c2f6b25d"
    },
    "serviceaccount": {
      "name": "my-app",
      "uid": "e38db1ee-20a6-4144-a046-e7b8cadff640"
    }
  },
  "nbf": 1724537940,
  "sub": "system:serviceaccount:vault:my-app"
}
```

I then use the JWT to try to authenticate with Vault directly (just to test that it works):
```shell
kubectl exec vault-0 -n vault -- vault write auth/kubernetes/login role=my-app jwt=$MY_JWT
```

However, I keep getting the following response:
```
Error writing data to auth/kubernetes/login: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/auth/kubernetes/login
Code: 403. Errors:

* permission denied
command terminated with exit code 2
```
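In case it matters, I get the same 403 when I bypass the CLI and call Vault's HTTP login endpoint directly from inside the pod (a sketch; the address assumes the dev-mode listener on `127.0.0.1:8200`, and `$MY_JWT` is expanded on my host shell as above):

```shell
# Same login attempt via Vault's HTTP API instead of the CLI.
# The auth/kubernetes/login payload takes the role name and the JWT.
kubectl exec vault-0 -n vault -- \
  curl -s --request POST \
  --data "{\"role\": \"my-app\", \"jwt\": \"$MY_JWT\"}" \
  http://127.0.0.1:8200/v1/auth/kubernetes/login
```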
I've been trying to figure out how to make this work, but everything I try results in the same error. I'm hoping someone can spot my mistake, or point me to a better way of debugging. Vault's audit logs have not been helpful either; they simply say "permission denied" as well.
As far as I can tell from the documentation, my current setup/configuration should work. Any idea why it might be failing?
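For completeness, I also read the configuration back out of Vault to rule out a typo in my setup steps, and it matched what I wrote above (output elided here):

```shell
# Sanity-check what Vault actually stored for the auth method and the role
kubectl -n vault exec vault-0 -- vault read auth/kubernetes/config
kubectl -n vault exec vault-0 -- vault read auth/kubernetes/role/my-app
```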
FYI: I am running these commands against a k3s Kubernetes cluster.
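For reference, this is roughly how I decoded the token payload shown above. It's a minimal sketch that builds a fake token inline so it runs standalone; in the pod, the real token is read from the mounted file instead (note this only base64url-decodes the payload, it does not verify the signature):

```python
import base64
import json

def decode_jwt_payload(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = jwt.split(".")[1]
    # JWT segments are base64url without padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fake token for demonstration only. In the container you would read the
# real one, e.g. from /var/run/secrets/tokens/vault-token.
claims = {
    "iss": "https://kubernetes.default.svc.cluster.local",
    "sub": "system:serviceaccount:vault:my-app",
    "aud": ["vault"],
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake_jwt = f"eyJhbGciOiJSUzI1NiJ9.{payload}.signature"

print(decode_jwt_payload(fake_jwt)["sub"])  # system:serviceaccount:vault:my-app
```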