gitopsWithArgoCD-Docs
gitops-argocd-Repo
1. What is GitOps?
GitOps is a modern approach to managing and deploying infrastructure and applications using Git as the single source of truth.
Instead of manually applying changes to servers or clusters, you declare the desired system state in Git, and an automated tool keeps your running systems in sync with that state.
Core Principles :
- Declarative configuration: All infrastructure and application configs are stored as code in Git repositories (e.g., Kubernetes manifests, Helm charts, Terraform modules).
- Version-controlled: Every change is done through Git commits, pull requests, and code reviews—giving you full history, auditability, and rollback capability.
- Automated reconciliation: Specialized GitOps controllers (like Argo CD or Flux) continuously watch the Git repo and the live environment. If something drifts, they automatically sync it back to match the Git state. The GitOps operator (Argo CD) also makes the system self-healing, reducing the risk of human error. The operator continuously loops through three steps: observe, diff, and act (a minimal illustration of this loop is sketched after this list).
  - Observe: it checks the Git repository for any changes in the desired state.
  - Diff: it compares the desired state from the observe step with the actual state of the cluster; any mismatch is called "drift".
  - Act: it runs its reconciliation logic and applies the necessary changes to bring the cluster back to match Git, so the system self-heals from manual or accidental changes.
Desired State: Git , Actual State: Kubernetes Cluster
- Continuous deployment: Merging to the main branch becomes the trigger for deployments, replacing manual kubectl apply or ad-hoc scripts.
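To make the observe/diff/act loop concrete, here is a minimal sketch using plain git and kubectl. This is illustrative only — Argo CD implements this logic internally; the repo clone path and manifests directory are assumptions:

while true; do
  git -C gitops-repo pull --quiet                               # observe: fetch the desired state from Git
  if ! kubectl diff -f gitops-repo/manifests/ > /dev/null; then # diff: a non-zero exit code means drift
    kubectl apply -f gitops-repo/manifests/                     # act: reconcile the cluster back to Git
  fi
  sleep 180                                                     # poll on an interval (Argo CD defaults to 3 minutes)
done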
Benefits :
- Full audit trail of all infrastructure and app changes.
- Rollback is as simple as reverting a Git commit.
- Strong security (no direct access to production clusters needed - pull the desired state from Git and apply it in one or more environments or clusters)
- Enables collaboration and consistent workflows across teams.
Push vs Pull based Deployments :
1. Push-based Deployment :
How it works :
- The CI/CD system (like Jenkins) builds the code, creates container images, pushes them to a registry, and then directly applies manifests to the Kubernetes cluster using kubectl apply.
- The CI/CD system has Read-Write (RW) access to the cluster.
Key points :
- ✅ Easy to deploy Helm charts.
- ✅ Easy to inject container version updates via the build pipeline.
- ✅ Simpler secret management from inside the pipeline.
- ❌ Cluster config is embedded in the CI system (tightly coupled).
- ❌ CI system holds RW access to the cluster (security risk).
- ❌ Deployment approach is tied to the CD system (less flexible).
2. Pull-based Deployment (GitOps) :
How it works :
- The CI system builds and pushes images to a registry.
- The desired manifests are committed to a Git repository.
- A GitOps operator (like Argo CD) inside the cluster pulls the manifests from Git and syncs them to the cluster, reconciling continuously.
Key points :
- ✅ No external user/client can modify the cluster (only GitOps operator can).
- ✅ Can scan container registries for new versions.
- ✅ Secrets can be managed via Git repo + Vault.
- ✅ Not coupled to the CD pipeline — independent.
- ✅ Supports multi-tenant setups.
- ❌ Managing secrets for Helm deployments is harder.
- ❌ Generic secret management is more complex.
2. ArgoCD Basics
What is Argo CD?
Argo CD (Argo Continuous Delivery) is a GitOps tool for Kubernetes that:
- Runs inside your cluster as a controller.
- Continuously monitors a Git repository that stores your Kubernetes manifests (YAML, Helm, Kustomize, etc.).
- Automatically applies and syncs those manifests to the cluster.
- Provides a web UI, CLI, and API to visualize and manage application deployments.
Installation :
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
kubectl create namespace argocd
helm install argocd argo/argo-cd -n argocd

# Optional: Specify custom values
helm show values argo/argo-cd > values.yaml
helm install argocd argo/argo-cd -n argocd -f values.yaml

kubectl port-forward svc/argocd-server --address=0.0.0.0 -n argocd 8080:443

# Get the Initial Admin Password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
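One setting worth knowing about when using a custom values.yaml is running the API server without TLS (this becomes relevant again in the webhook section later). This is a hedged sketch — the exact key layout depends on the chart version; recent argo/argo-cd charts expose the argocd-cmd-params-cm settings under configs.params, while older chart versions use server.extraArgs instead:

# values.yaml (sketch — key layout depends on the argo/argo-cd chart version)
configs:
  params:
    server.insecure: true   # serve plain HTTP instead of HTTPS

Install with the override as shown above: helm install argocd argo/argo-cd -n argocd -f values.yaml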
Why use Argo CD?
Argo CD solves common deployment and operations problems by embracing GitOps principles:
Problem | How Argo CD Helps |
---|---|
Manual kubectl apply steps | Automates syncing from Git |
Configuration drift (manual changes) | Detects drift and self-heals by reverting to Git |
Hard to audit who changed what | Uses Git history as the single source of truth |
CI pipelines need cluster credentials | Removes this risk — cluster pulls from Git |
Slow feedback on deployment status | Real-time UI with health, sync status, and diffs |
ArgoCD Concepts & Terminology :
Term | Description |
---|---|
Application | A group of Kubernetes resources as defined by a manifest. An Application is a Custom Resource (defined by a CRD) that represents a deployed application instance in a cluster. It is defined by two key pieces of information: the source (the Git repo holding the desired state as Kubernetes manifests) and the destination (the cluster and namespace where those resources should be deployed). |
Application source type | The tool used to build the application. E.g., Helm, Kustomize, or Ksonnet. |
Project | Provides a logical grouping of applications, useful when Argo CD is used by multiple teams. |
Target state | The desired state of an application, as represented by files in a Git repository. |
Live state | The live state of that application. What pods, configmaps, secrets, etc. are created/deployed in a Kubernetes cluster. |
Sync status | Whether or not the live state matches the target state. Is the deployed application the same as Git says it should be? |
Sync | The process of making an application move to its target state (e.g., by applying changes to a Kubernetes cluster). |
Sync operation status | Whether or not a sync succeeded. |
Refresh | Compare the latest code in Git with the live state to figure out what is different. |
Health | The health of the application — is it running correctly? Can it serve requests? |
What is an Argo CD Application?
An Application is the core deployment unit in Argo CD.
It represents a set of Kubernetes resources defined in a Git repository and tells Argo CD:
- Where to get the manifests (Git repo + path + branch)
- Where to deploy them (cluster + namespace)
- How to manage them (sync policies like auto-sync, self-heal)
Creating an Application :
1.Using CLI
argocd app create color-app \
  --repo https://github.com/sid/app-1.git \
  --path team-a/color-app \
  --dest-namespace color \
  --dest-server https://kubernetes.default.svc
Explanation:
--repo: Git repository URL containing the manifests.
--path: Path inside the repo where the manifests are stored.
--dest-namespace: Target namespace in the cluster.
--dest-server: Target Kubernetes API server (usually the same cluster).
argocd login localhost:8080 --username admin --password <password>

argocd app create solar-system-app \
  --repo https://github.com/sidd-harth/gitops-argocd \
  --path ./solar-system \
  --dest-namespace solar-system \
  --dest-server https://kubernetes.default.svc

argocd app list
argocd app sync solar-system-app
argocd app get solar-system-app

# check using kubectl with the namespace that argocd deployed in it
kubectl get app -n gitops
kubectl describe app -n gitops
2.Using a YAML manifest (color-app.yaml)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: color-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sid/app-1.git
    targetRevision: HEAD
    path: team-a/color
  destination:
    server: https://kubernetes.default.svc
    namespace: color
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
Key fields:
- metadata.name: Application name shown in Argo CD UI.
- spec.source: Where manifests come from (repo, branch, path).
- spec.destination: Which cluster and namespace to deploy to.
- syncPolicy.automated: Enables auto-sync and self-healing.
- syncOptions.CreateNamespace=true: Automatically create the namespace if it doesn’t exist.
3.Using the UI
- Click “+ NEW APP”
- Application Name: A unique name (e.g. color-app)
- Project: Select default (unless you created custom projects)
- Sync Policy:
  - Choose Manual or Automatic
  - Enable Self Heal and Prune if you want Argo CD to auto-fix drift and delete removed resources
- Specify the Source (Git Repo):
  - Repository URL: enter it directly, or add it first via settings => Repositories => Connect Repo => Via HTTP/HTTPS => Type: git => Name: Any Name => Project: default => Repository URL, then check the Connection Status
  - Revision: HEAD (or a branch/tag name like main)
  - Path: The path in the repo (./solar-system)
- Specify the Destination (Cluster + Namespace):
  - Cluster URL: Use https://kubernetes.default.svc for the same cluster Argo CD is running in
  - Namespace: solar-system
  - Check “Create Namespace” if it does not exist
- Review and Create:
  - Click “Create”
- Sync the Application:
  - After creation, go to the app page
  - Click “Sync” (if using manual sync)
  - Argo CD will pull the manifests and deploy them to your cluster
ArgoCD Architecture :
Argo CD is deployed as a set of Kubernetes controllers and services inside your cluster.
It continuously pulls manifests from Git and applies them to the cluster, keeping everything in sync.
Component | Role |
---|---|
API Server | Provides the UI, CLI (argocd), and REST API. Handles RBAC and authentication. |
Repository Server (Repo Server) | Clones Git repositories and renders manifests (Helm, Kustomize, etc.). |
Application Controller | Core GitOps reconciliation engine: compares desired vs live state and performs sync actions. |
Dex (Optional) | Identity provider for SSO authentication (OIDC, GitHub, LDAP, etc.). |
Redis (Optional) | Used as a cache to speed up repository and application state operations. |
Create ArgoCD Project :
When you install Argo CD, it automatically creates a built-in project called default :
- Has no restrictions by default
- Allows any Git repository as a source
- Allows deployments to any cluster and any namespace
- Allows all resource kinds (cluster-scoped and namespace-scoped)
❯❯❯ argocd proj list
NAME     DESCRIPTION  DESTINATIONS  SOURCES  CLUSTER-RESOURCE-WHITELIST  NAMESPACE-RESOURCE-BLACKLIST  SIGNATURE-KEYS  ORPHANED-RESOURCES
default               *,*           *        */*                         <none>                        <none>          disabled
kubectl get appproj -n gitops # check project
We'll now create a custom project that has some restrictions :
Creating your own AppProject lets you:
- Restrict which repos can be used
- Restrict which clusters/namespaces can be deployed to
- Apply RBAC per project
Using UI :
- Go to Settings → Projects
- Click “+ NEW PROJECT”
- Project Name and Description
- Create
- Source Repositories: only the repositories listed here are allowed as sources.
- Destinations (CLUSTER + NAMESPACE): which clusters and namespaces applications in this project may deploy to.
In this example, ClusterRole (in the rbac.authorization.k8s.io API group) is added to the Cluster Resource Deny List, i.e. it is restricted.
Any Application assigned to this Project that tries to create, update, or delete a ClusterRole will be blocked by Argo CD.
The sync will fail with a permission/validation error, and the ClusterRole resource will not be applied.
Section | What it means |
---|---|
Cluster Resource Allow List | A whitelist of cluster-scoped resources (apply to the whole cluster, not tied to a namespace) that applications are allowed to create/update/delete. If empty → all are allowed by default. |
Cluster Resource Deny List | A blacklist of cluster-scoped resources that applications are forbidden from managing. In this example, ClusterRole is listed here — meaning apps in this project cannot create or modify ClusterRoles. |
Namespace Resource Allow List | A whitelist of namespace-scoped resources (like Deployments, ConfigMaps, Services, etc.) that applications are allowed to manage. |
Namespace Resource Deny List | A blacklist of namespace-scoped resources that are forbidden. |
❯❯❯ argocd proj get test-project
Name:                        test-project
Description:                 for testing
Destinations:                https://kubernetes.default.svc,*
Repositories:                https://github.com/sidd-harth/gitops-argocd
Scoped Repositories:         <none>
Allowed Cluster Resources:   <none>
Scoped Clusters:             <none>
Denied Namespaced Resources: <none>
Signature keys:              <none>
Orphaned Resources:          disabled
argocd proj get test-project -o yaml
metadata:
  creationTimestamp: "2025-09-13T18:25:20Z"
  generation: 5
  managedFields:
  - apiVersion: argoproj.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:clusterResourceBlacklist: {}
        f:description: {}
        f:destinations: {}
        f:sourceRepos: {}
      f:status: {}
    manager: argocd-server
    operation: Update
    time: "2025-09-13T18:43:52Z"
  name: test-project
  namespace: gitops
  resourceVersion: "30359"
  uid: 829347ce-cb3d-46bb-8b1b-8bef6f5a9a56
spec:
  clusterResourceBlacklist:
  - group: '""'
    kind: ClusterRole
  description: for testing
  destinations:
  - name: in-cluster
    namespace: '*'
    server: https://kubernetes.default.svc
  sourceRepos:
  - https://github.com/sidd-harth/gitops-argocd
status: {}
Applications in test-project can deploy anything from the specified Git repo to any namespace in the in-cluster cluster — except creating ClusterRole objects.
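For reference, a declarative equivalent of this project can be kept in Git as an AppProject manifest. This is a hedged sketch (it uses the rbac.authorization.k8s.io group for the ClusterRole deny entry, rather than the empty group shown in the output above):

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: test-project
  namespace: argocd          # the namespace Argo CD is installed in (gitops in this setup)
spec:
  description: for testing
  sourceRepos:
  - https://github.com/sidd-harth/gitops-argocd
  destinations:
  - server: https://kubernetes.default.svc
    namespace: '*'
  clusterResourceBlacklist:
  - group: rbac.authorization.k8s.io
    kind: ClusterRole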
3. ArgoCD Intermediate
Reconciliation loop
1. timeout.reconciliation (TIMEOUT option) :
What it is:
A setting in the argocd-cm ConfigMap that defines how often the reconciliation loop runs, i.e. the interval at which Argo CD polls Git for changes.
This interval controls how often your ArgoCD application will synchronize from the Git repository :
- In a generic ArgoCD configuration, the default timeout period is set to 3 minutes.
- This value is configurable and is used within the ArgoCD repo server.
- The ArgoCD repo server is responsible for retrieving the desired state from the Git repository, and it exposes a timeout option called the application reconciliation timeout.
- If we check the environment variables of the ArgoCD repo server pod, we can see that the key timeout.reconciliation is read from the ArgoCD config map.
kubectl -n gitops describe pod argocd-repo-server-cd79f5cc4-lvvgp | grep -i "ARGOCD_RECONCILIATION_TIMEOUT:" -B1
kubectl get cm argocd-cm -n gitops -o yaml | grep -i timeout

# kubectl patch command to update the timeout.reconciliation value in the argocd-cm
kubectl -n gitops patch configmap argocd-cm --patch='{"data":{"timeout.reconciliation":"300s"}}'
kubectl -n gitops rollout restart deploy/argocd-repo-server
The argocd-repo-server takes the config value from argocd-cm :
❯❯❯ kubectl -n gitops describe pod argocd-repo-server-cd79f5cc4-lvvgp | grep -i "ARGOCD_RECONCILIATION_TIMEOUT:" -B1
      ARGOCD_REPO_SERVER_NAME:        argocd-repo-server
      ARGOCD_RECONCILIATION_TIMEOUT:  <set to the key 'timeout.reconciliation' of config map 'argocd-cm'>  Optional: true
❯❯❯ k get cm argocd-cm -n gitops -o yaml | grep -i timeout.reconciliation
  timeout.reconciliation: 180s
2. Webhook (GIT WEBHOOK option) :
What it is:
An event trigger that tells Argo CD to immediately start a reconciliation loop when new commits are pushed.
Where configured:
In your Git hosting platform (GitHub/GitLab/Bitbucket etc.)
You add a webhook pointing to Argo CD’s API:
https://<argocd-server>/api/webhook
kubectl -n gitops rollout restart deploy/argocd-repo-server
Effect:
Instead of waiting for the default polling interval (3 min), Argo CD instantly sees new commits and starts reconciliation.
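Optionally, the webhook can be protected with a shared secret. As a hedged example (assuming a GitHub-style webhook; the key name follows the Argo CD webhook documentation, and the namespace matches this setup), the secret is stored in the argocd-secret Secret:

kubectl -n gitops patch secret argocd-secret \
  --patch='{"stringData":{"webhook.github.secret":"<shared-webhook-secret>"}}'
# configure the same value as the webhook secret in your Git hosting platform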
By default, the argocd-server component only serves HTTPS (TLS) on port 443/8080.
When your Gitea webhook points to http://.../api/webhook instead of https://..., Argo CD will reject it, because it expects TLS and won’t accept the request without valid HTTPS.
The --insecure flag tells argocd-server: “Serve plain HTTP instead of HTTPS.”
kubectl edit -n gitops deployments.apps argocd-server
containers:
- args:
  - /usr/local/bin/argocd-server
  - --insecure   # add this line
❯❯❯ k get po -n gitops argocd-server-554fb76c44-ck57g
NAME                             READY   STATUS    RESTARTS   AGE
argocd-server-554fb76c44-ck57g   1/1     Running   0          8m35s
kubectl port-forward svc/argocd-server --address=0.0.0.0 -n gitops 8080:80
- Create a New App in ArgoCD
- Now, when you push a new commit, Argo will show an Out of Sync status for this App.
Application health
Status | Meaning |
---|---|
🟢 Healthy | All resources are 100% healthy |
🔵 Progressing | Resource is unhealthy, but could still be healthy given time |
💗 Degraded | Resource status indicates a failure or an inability to reach a healthy state |
🟡 Missing | Resource is not present in the cluster |
🟣 Suspended | Resource is suspended or paused. Typical example is a paused Deployment |
⚪ Unknown | Health assessment failed and actual health status is unknown |
Sync Strategies
Feature | Description |
---|---|
Manual or automatic sync | If set to automatic, Argo CD applies changes from Git without manual intervention, creating or updating resources in the target Kubernetes cluster. |
Auto-pruning of resources | Controls what happens when files are deleted or removed from Git: with pruning enabled, the corresponding resources are also deleted from the cluster. |
Self-Heal of cluster | Defines what Argo CD does when changes are made directly to the cluster (e.g. via kubectl edit): with self-heal enabled, such drift is reverted to match Git. |
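These options map to the Application's syncPolicy block; a minimal sketch:

syncPolicy:
  automated:        # enable automatic sync (omit this block for manual sync)
    prune: true     # delete cluster resources whose manifests were removed from Git
    selfHeal: true  # revert manual changes made directly to the cluster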
Declarative Setup - Mono Application
Mono-application = one Application object tracks exactly one app path.
Keep app config declarative in Git; Argo CD watches it and reconciles automatically (if automated sync is enabled).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: geocentric-model-app
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: http://165.22.209.118:3000/siddharth/gitops-argocd.git
    targetRevision: HEAD
    path: ./declarative/manifests/geocentric-model   # this path contains deployment and svc
  destination:
    server: https://kubernetes.default.svc
    namespace: geocentric-model
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
kubectl apply -f filename.yaml
App of Apps
What is App of Apps?
- Mono-app = one Application → one set of manifests.
- App of Apps = one parent/root Application points to a folder containing multiple Application YAMLs.
- Argo CD applies the root app → which creates all the child apps.
- You only need to manually create the root application — all other apps are bootstrapped from Git.
Example repo structure :
gitops-repo/
└─ root/
   ├─ app-of-apps.yaml        # the parent Argo CD Application
   ├─ apps/
   │  ├─ frontend.yaml        # child app definition
   │  ├─ backend.yaml         # child app definition
   │  └─ database.yaml        # child app definition
   ├─ frontend/               # actual manifests (Helm/Kustomize/YAML)
   │  ├─ deployment.yaml
   │  └─ service.yaml
   ├─ backend/
   └─ database/
Parent Application (app-of-apps.yaml) :
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/gitops-repo.git
    targetRevision: main
    path: root/apps            # folder that holds child App YAMLs
    directory:
      recurse: true            # include all files recursively
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
    - CreateNamespace=true
One of the child applications (frontend.yaml) :
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: frontend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/gitops-repo.git
    targetRevision: main
    path: root/frontend
  destination:
    server: https://kubernetes.default.svc
    namespace: frontend
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
    syncOptions:
    - CreateNamespace=true
How it works:
1.Apply the root app:
kubectl apply -f root/app-of-apps.yaml -n argocd
2.Argo CD syncs the root app.
3.Root app creates the child Application objects from Git.
4.Each child app manages its own manifests as if it were a standalone app.
Deploy apps using HELM Chart
1. Deploy Using My Helm Chart :
❯❯❯ cd gitops-argocd/
❯❯❯ ls | grep config
📂 declarative  📂 health-check  📂 helm-chart  📂 jenkins-demo  📄 LICENSE  📂 nginx-app  📂 sealed-secret  📂 solar-system  📂 vault-secrets
❯❯❯ cd helm-chart/
❯❯❯ cd templates/
❯❯❯ pwd
/home/poseidon/gitops-argocd/helm-chart/templates
❯❯❯ ls
📄 _helpers.tpl  📄 configmap.yaml  📄 deployment.yaml  📄 NOTES.txt  📄 service.yaml
❯❯❯ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deploy
  labels:
    {{- include "random-shapes.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "random-shapes.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "random-shapes.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-configmap
❯❯❯ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-service
  labels:
    {{- include "random-shapes.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
  - port: {{ .Values.service.port }}
    targetPort: {{ .Values.service.targetPort }}
    protocol: TCP
    name: http
  selector:
    {{- include "random-shapes.selectorLabels" . | nindent 4 }}
❯❯❯ cd ..
❯❯❯ ls
📄 Chart.yaml  📂 templates  📄 values.yaml
❯❯❯ cat values.yaml
replicaCount: 1
image:
  repository: siddharth67/php-random-shapes:v1
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
service:
  type: ClusterIP
  port: 80
  targetPort: 80
color:
  circle: black
  oval: black
  triangle: black
  rectangle: black
  square: black
❯❯❯ cat Chart.yaml
apiVersion: v2
name: random-shapes-chart
description: A Helm chart for Random Shape App
version: 1.0.0
argocd app create helm-random-shapes \
  --repo https://github.com/sidd-harth/gitops-argocd \
  --path helm-chart \
  --helm-set replicaCount=2 \
  --helm-set color.circle=pink \
  --helm-set color.square=green \
  --helm-set service.type=ClusterIP \
  --dest-namespace default \
  --dest-server https://kubernetes.default.svc
❯❯❯ argocd app create helm-random-shapes \
      --repo https://github.com/sidd-harth/gitops-argocd \
      --path helm-chart \
      --helm-set replicaCount=2 \
      --helm-set color.circle=pink \
      --helm-set color.square=green \
      --helm-set service.type=ClusterIP \
      --dest-namespace default \
      --dest-server https://kubernetes.default.svc
application 'helm-random-shapes' created

❯❯❯ h ls
NAME  NAMESPACE  REVISION  UPDATED  STATUS  CHART  APP VERSION

❯❯❯ argocd app get helm-random-shapes
Name:               gitops/helm-random-shapes
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.example.com/applications/helm-random-shapes
Source:
- Repo:             https://github.com/sidd-harth/gitops-argocd
  Target:
  Path:             helm-chart
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        OutOfSync from (05e2501)
Health Status:      Missing

GROUP  KIND        NAMESPACE  NAME                           STATUS     HEALTH   HOOK  MESSAGE
       ConfigMap   default    helm-random-shapes-configmap   OutOfSync  Missing
       Service     default    helm-random-shapes-service     OutOfSync  Missing
apps   Deployment  default    helm-random-shapes-deploy      OutOfSync  Missing

❯❯❯ argocd app get helm-random-shapes
Name:               gitops/helm-random-shapes
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.example.com/applications/helm-random-shapes
Source:
- Repo:             https://github.com/sidd-harth/gitops-argocd
  Target:
  Path:             helm-chart
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to (05e2501)
Health Status:      Healthy

GROUP  KIND        NAMESPACE  NAME                           STATUS  HEALTH   HOOK  MESSAGE
       ConfigMap   default    helm-random-shapes-configmap   Synced           configmap/helm-random-shapes-configmap created
       Service     default    helm-random-shapes-service     Synced  Healthy  service/helm-random-shapes-service created
apps   Deployment  default    helm-random-shapes-deploy      Synced  Healthy  deployment.apps/helm-random-shapes-deploy created
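A declarative equivalent of the --helm-set flags can be expressed in the Application manifest under spec.source.helm.parameters. A hedged sketch of such a manifest:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: helm-random-shapes
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sidd-harth/gitops-argocd
    targetRevision: HEAD
    path: helm-chart
    helm:
      parameters:            # equivalent to --helm-set key=value
      - name: replicaCount
        value: "2"
      - name: color.circle
        value: pink
      - name: color.square
        value: green
      - name: service.type
        value: ClusterIP
  destination:
    server: https://kubernetes.default.svc
    namespace: default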
2. Deploy Using Bitnami Helm Charts :
Add Bitnami Helm Charts Repo :
Create App :
You can also modify all the chart parameters, for example changing service.type from LoadBalancer to ClusterIP (a CLI equivalent of these steps is sketched below) :
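A hedged CLI sketch of the "Add Bitnami Helm Charts Repo" and "Create App" steps above (the chart version is a placeholder; adjust it to whatever the Bitnami repository currently serves):

# register the Bitnami chart repository with Argo CD
argocd repo add https://charts.bitnami.com/bitnami --type helm --name bitnami

# create an app from the nginx chart, overriding service.type
argocd app create bitnami-helm-nginx-app \
  --repo https://charts.bitnami.com/bitnami \
  --helm-chart nginx \
  --revision <chart-version> \
  --helm-set service.type=ClusterIP \
  --dest-namespace bitnami \
  --dest-server https://kubernetes.default.svc \
  --sync-option CreateNamespace=true

argocd app sync bitnami-helm-nginx-app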
❯❯❯ k get all -n bitnami
NAME                                          READY   STATUS     RESTARTS   AGE
pod/bitnami-helm-nginx-app-7bc9fc68c9-np9t7   0/1     Init:0/1   0          46s

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/bitnami-helm-nginx-app   ClusterIP   10.96.61.98   <none>        80/TCP,443/TCP   47s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/bitnami-helm-nginx-app   0/1     1            0           46s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/bitnami-helm-nginx-app-7bc9fc68c9   1         1         0       46s
4. ArgoCD Advanced
How ArgoCD manages role-based access control.
RBAC Architecture in Argo CD :
- Users or Groups
  - Authenticated identities are mapped to RBAC roles.
- Roles
  - Logical sets of permissions.
  - Defined in the argocd-rbac-cm ConfigMap (in the argocd namespace).
  - Each role can contain multiple policy rules.
- Policies (Rules)
  - Define what actions a role can perform on what resources.
  - Written in the format:
    p, <role>, <resource>, <action>, <object>, <effect>
    - resource → what kind of object (applications, projects, repositories, clusters…)
    - action → get, create, update, delete, sync, override, etc.
    - object → * (all) or specific names
- Role Bindings
  - Map users or groups to roles:
    g, <username_or_group>, <role>
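Putting this together, the policies and bindings usually live in the policy.csv key of argocd-rbac-cm. A minimal sketch of that ConfigMap (mirroring the patches shown below):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly        # role applied to users with no explicit binding
  policy.csv: |
    p, role:create-cluster, clusters, create, *, allow
    g, jai, role:create-cluster
    p, role:kia-admins, applications, *, kia-project/*, allow
    g, ali, role:kia-admins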
Give user jai permission to create clusters :
kubectl -n argocd patch configmap argocd-rbac-cm \
  --patch='{"data":{"policy.csv":"p, role:create-cluster, clusters, create, *, allow\ng, jai, role:create-cluster"}}'

# Test it:
argocd account can-i create clusters '*'
# should print: yes   (when logged in as jai)
argocd account can-i delete clusters '*'
# should print: no    (when logged in as jai)
Give user ali permission to manage applications in kia-project :
kubectl -n argocd patch configmap argocd-rbac-cm \
  --patch='{"data":{"policy.csv":"p, role:kia-admins, applications, *, kia-project/*, allow\ng, ali, role:kia-admins"}}'

# Test it:
argocd account can-i sync applications kia-project/*
# should print: yes   (when logged in as ali)
Notes
- p = policy (what a role can do)
- g = group binding (who gets the role)
- argocd account can-i ... tests permissions for the currently logged in user
- After patching, Argo CD automatically reloads the RBAC config (no restart needed)
User Management
- The default ArgoCD installation has one built-in admin user that has full access to the system and is a super user.
- It is advised to only utilize the admin user for initial settings and then disable it after adding all required users.
❯❯❯ argocd account list
NAME   ENABLED  CAPABILITIES
admin  true     login
- New users can be created and used for various roles.
- New users can be defined using ArgoCD ConfigMap.
- We edit the ArgoCD ConfigMap and add an accounts.<username> record.
- Each user can be associated with two capabilities: apiKey and login.
# Creates two local accounts: jai and ali
kubectl -n argocd patch configmap argocd-cm \
  --patch='{"data":{"accounts.jai": "apiKey,login"}}'
kubectl -n argocd patch configmap argocd-cm \
  --patch='{"data":{"accounts.ali": "apiKey,login"}}'
Grants them:
- login → allows logging in via UI/CLI
- apiKey → allows generating JSON Web Tokens (JWTs) for API access.
- After this, you can bind them to RBAC roles using g, jai, role:... in the argocd-rbac-cm.
❯❯❯ kubectl -n gitops patch configmap argocd-cm \
      --patch='{"data":{"accounts.jai": "apiKey,login"}}'
configmap/argocd-cm patched
❯❯❯ kubectl -n gitops patch configmap argocd-cm \
      --patch='{"data":{"accounts.ali": "apiKey,login"}}'
configmap/argocd-cm patched
❯❯❯ argocd account list
NAME   ENABLED  CAPABILITIES
admin  true     login
ali    true     apiKey, login
jai    true     apiKey, login
❯❯❯ argocd account update-password --account jai
*** Enter password of currently logged in user (admin):
*** Enter new password for user jai:
*** Confirm new password for user jai:
Password updated
- ArgoCD has two predefined default roles: read-only and admin.
- The read-only role provides read-only access to all resources, whereas the admin role provides unrestricted access to all resources.
- By default, the admin user is assigned the admin role.
- We can modify this and can also assign custom roles to users by editing the ArgoCD RBAC ConfigMap.
# set the default role in Argo CD RBAC to readonly
kubectl -n argocd patch configmap argocd-rbac-cm \
  --patch='{"data":{"policy.default": "role:readonly"}}'
In this example, we are patching the ArgoCD RBAC ConfigMap by assigning a default read-only role to any user who is not mapped to a specific role.
Bitnami Sealed Secrets
Summary of Flow :
- Create normal Secret
- Encrypt with kubeseal → SealedSecret
- Commit to Git
- Argo CD syncs it
- sealed-secrets-controller decrypts it to a real Secret
Step 1 — Create a normal Kubernetes Secret manifest
kubectl create secret generic mysql-password \
  --from-literal=password='s1Ddh@rt#' \
  --dry-run=client -o yaml > mysql-password_k8s-secret.yaml
This produces a file like:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
data:
  password: czF1RGRoQHJ0Iw==
Step 2 — Install the Sealed Secrets controller (via Argo CD)
argocd app create sealed-secrets \
  --repo https://bitnami-labs.github.io/sealed-secrets \
  --helm-chart sealed-secrets \
  --revision 2.2.0 \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace kube-system

# then sync
argocd app sync sealed

# or
argocd app create sealed \
  --repo https://bitnami-labs.github.io/sealed-secrets \
  --helm-chart sealed-secrets \
  --revision 2.2.0 \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace sealed \
  --sync-option CreateNamespace=true \
  --sync-policy automated

kubectl get all -n sealed
kubectl -n sealed get deploy,sealedsecret,po,svc
Step 3 — Get the public certificate from the controller
# Export the cluster’s sealing public key:
kubectl -n sealed get secret \
  -l sealedsecrets.bitnami.com/sealed-secrets-key=active \
  -o jsonpath='{.items[0].data.tls\.crt}' \
  | base64 -d > sealedSecret.crt
Step 4 — Install the kubeseal CLI locally
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.18.0/kubeseal-0.18.0-linux-amd64.tar.gz -O kubeseal.tar.gz
tar -xzf kubeseal.tar.gz
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
Step 5 — Seal your secret
# Use kubeseal to convert your Secret into a SealedSecret:
kubeseal -o yaml \
  --scope cluster-wide \
  --cert sealedSecret.crt \
  < mysql-password_k8s-secret.yaml \
  > mysql-password_sealed-secret.yaml
Now you have an encrypted SealedSecret manifest.
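The generated file looks roughly like the sketch below; the encryptedData value is a placeholder for a long, installation-specific ciphertext:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysql-password
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"   # because --scope cluster-wide was used
spec:
  encryptedData:
    password: AgBy...   # ciphertext; only the controller's private key can decrypt it
  template:
    metadata:
      name: mysql-password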
Step 6 — Commit the SealedSecret to Git (Argo CD watches it)
- Push mysql-password_sealed-secret.yaml to your Git repo along with your deployment.yaml.
- Argo CD will sync and apply it.
- The sealed-secrets-controller pod will decrypt it and create the real Secret.
Step 7 — Verify the decrypted Secret
kubectl get secret mysql-password -o yaml
You should see the real base64-encoded password.
Hashicorp Vault
1. ArgoCD Vault Plugin CLI :
Step 1 -- Create Vault App in ArgoCD
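The app itself was created in Argo CD (UI steps not reproduced here). As a hedged CLI sketch, assuming the chart comes from the official HashiCorp Helm repository and following the same pattern used for sealed-secrets above (the chart version is a placeholder; the app name and namespace match the output below):

argocd app create vault-app \
  --repo https://helm.releases.hashicorp.com \
  --helm-chart vault \
  --revision <chart-version> \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace vault-demo-gitops \
  --sync-option CreateNamespace=true \
  --sync-policy automated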
❯❯❯ k get all -n vault-demo-gitops
NAME                                           READY   STATUS    RESTARTS   AGE
pod/vault-app-0                                0/1     Running   0          2m2s
pod/vault-app-agent-injector-67ff69f54-8dxqc   1/1     Running   0          2m3s

NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/vault-app                      ClusterIP   10.96.132.44    <none>        8200/TCP,8201/TCP   2m3s
service/vault-app-agent-injector-svc   ClusterIP   10.96.201.162   <none>        443/TCP             2m3s
service/vault-app-internal             ClusterIP   None            <none>        8200/TCP,8201/TCP   2m3s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/vault-app-agent-injector   1/1     1            1           2m3s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/vault-app-agent-injector-67ff69f54   1         1         1       2m3s

NAME                         READY   AGE
statefulset.apps/vault-app   0/1     2m2s

❯❯❯ k get po -n vault-demo-gitops
NAME                                       READY   STATUS    RESTARTS   AGE
vault-app-0                                0/1     Running   0          2m10s
vault-app-agent-injector-67ff69f54-8dxqc   1/1     Running   0          2m11s
Step 2 --- Unsealed Process
# Access Vault UI
kubectl -n vault-demo-gitops port-forward svc/vault-app 8200:8200 --address=0.0.0.0

# From UI:
#   Key shares: 3
#   Key threshold: 2
# Then Download Keys (root key and 3 unseal keys); the UI will then ask for the Unseal Keys and the root key
Now vault-app-0 will be in the 1/1 Ready state:
❯❯❯ k get po -n vault-demo-gitops
NAME                                       READY   STATUS    RESTARTS   AGE
vault-app-0                                1/1     Running   0          10m
vault-app-agent-injector-67ff69f54-8dxqc   1/1     Running   0          10m
From the UI:
- Enable new engine => KV => Next => Path: credentials => Enable Engine
- Create Secret => Path for this secret: app => Secret Data => enter random values (username, password, apikey) => Save
- Now, under Secrets, we can see credentials; from the three dots => details, we can see the path /credentials
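A hedged CLI equivalent of these UI steps (run from inside the Vault pod after logging in with the root token; the secret values are placeholders):

kubectl -n vault-demo-gitops exec -it vault-app-0 -- sh
vault login <root token>
# enable a KV v2 engine at the path "credentials"
vault secrets enable -path=credentials kv-v2
# write the secret at credentials/app (the data path becomes credentials/data/app, matching the avp annotation below)
vault kv put credentials/app username=<username> password=<password> apikey=<apikey>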
Step 3 -- Create Secret with Template Syntax: vault.yaml
kind: Secret
apiVersion: v1
metadata:
  name: app-crds
  annotations:
    avp.kubernetes.io/path: "credentials/data/app"
type: Opaque
stringData:
  apikey: <apikey>
  username: <username>
  password: <password>
Step 4 -- Install ArgoCD Vault Plugin
curl -L -o argocd-vault-plugin https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v1.17.0/argocd-vault-plugin_1.17.0_linux_amd64
chmod +x argocd-vault-plugin
sudo mv argocd-vault-plugin /usr/local/bin/

# Verify installation
argocd-vault-plugin version
Step 5 -- Create vault.env
VAULT_ADDR=http://localhost:8200
VAULT_TOKEN=<root token>
AVP_TYPE=vault
AVP_AUTH_TYPE=token
argocd-vault-plugin generate -c vault.env - < vault.yaml
Running the command produces output like the following :
apiVersion: v1
kind: Secret
metadata:
  annotations:
    avp.kubernetes.io/path: credentials/data/app
  name: app-crds
stringData:
  apikey: sdfshifsdifj596211af
  password: root9289
  username: omar
type: Opaque
❯❯❯ cat vault.yaml
kind: Secret
apiVersion: v1
metadata:
  name: app-crds
  annotations:
    avp.kubernetes.io/path: "credentials/data/app"
type: Opaque
stringData:
  apikey: <apikey>
  username: <username>
  password: <password>
2. ArgoCD with ArgoCD Vault Plugin :
We need to patch the argocd-repo-server deployment and the argocd-cm config map :
k edit -n gitops deployments.apps argocd-repo-server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
      - name: argocd-repo-server
        volumeMounts:
        - name: custom-tools
          mountPath: /usr/local/bin/argocd-vault-plugin
          subPath: argocd-vault-plugin
        # Note: AVP config (for the secret manager, etc) can be passed in several ways. This is just one example
        # https://argocd-vault-plugin.readthedocs.io/en/stable/config/
        envFrom:
        - secretRef:
            name: argocd-vault-plugin-credentials
      volumes:
      - name: custom-tools
        emptyDir: {}
      initContainers:
      - name: download-tools
        image: alpine:3.8
        command: [sh, -c]
        # Don't forget to update this to whatever the stable release version is
        # Note the lack of the `v` prefix unlike the git tag
        env:
        - name: AVP_VERSION
          value: "1.7.0"
        args:
        - >-
          wget -O argocd-vault-plugin
          https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v${AVP_VERSION}/argocd-vault-plugin_${AVP_VERSION}_linux_amd64 &&
          chmod +x argocd-vault-plugin &&
          mv argocd-vault-plugin /custom-tools/
        volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools
      # Not strictly necessary, but required for passing AVP configuration from a secret and for using Kubernetes auth to Hashicorp Vault
      automountServiceAccountToken: true
k edit -n gitops cm argocd-cm
data:
  configManagementPlugins: |-
    - name: argocd-vault-plugin
      generate:
        command: ["argocd-vault-plugin"]
        args: ["generate", "./"]
k rollout restart deploy argocd-repo-server
From the UI: Create App =>
- URL: https://github.com/sidd-harth/gitops-argocd
- Path: ./vault-secrets
- Under the namespace option, you will find a Directory icon; switch from Directory to Plugin
  - Name: argocd-vault-plugin
  - Env:
    VAULT_ADDR=http://vault-app.vault-demo.svc.cluster.local:8200
    VAULT_TOKEN=<root token>
    AVP_TYPE=vault
    AVP_AUTH_TYPE=token
- Create
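A hedged declarative equivalent of this UI step: the Application's spec.source.plugin block names the configured plugin and passes the same environment variables (the app name is hypothetical; the destination namespace matches the test command below):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vault-secrets-app        # hypothetical name for illustration
  namespace: gitops
spec:
  project: default
  source:
    repoURL: https://github.com/sidd-harth/gitops-argocd
    targetRevision: HEAD
    path: vault-secrets
    plugin:
      name: argocd-vault-plugin
      env:
      - name: VAULT_ADDR
        value: http://vault-app.vault-demo.svc.cluster.local:8200
      - name: VAULT_TOKEN
        value: <root token>
      - name: AVP_TYPE
        value: vault
      - name: AVP_AUTH_TYPE
        value: token
  destination:
    server: https://kubernetes.default.svc
    namespace: vault-secret
  syncPolicy:
    syncOptions:
    - CreateNamespace=true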
# to test
k -n vault-secret get secrets -o yaml