I’m a DevOps engineer and in this blog I’ll get you from zero to a working local cluster, deploy an app with raw YAML, then switch to Helm—plus the mental models and commands you’ll reuse daily.
Executive Summary
- Stand up a local Kubernetes cluster (kind/minikube) and verify it with kubectl.
- Cement a mental model of Pods → ReplicaSets → Deployments → Services → Controllers.
- Apply clean YAML (Deployment + Service), then Helm-ify it with a minimal chart.
- Learn the 12 commands I actually use day-to-day (kubectl + Helm).
- Practice pitfall recovery (images won’t pull, pending pods, bad Services, context issues).
Prereqs
- minikube
- kind
- Docker running (required by kind & often by minikube).
- kubectl (v1.28+ recommended)
- helm
Install Hints (macOS, Linux, Windows)
1. Docker (required for kind & often for minikube)

- macOS (with Homebrew):

```bash
brew install --cask docker
open /Applications/Docker.app
```

Tip: Ensure Docker Desktop is running before creating clusters.

- Linux (Debian/Ubuntu):

```bash
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add your user to the docker group (log out/in after)
sudo usermod -aG docker $USER
```

- Windows (PowerShell as Admin):

```powershell
choco install docker-desktop
```

Tip: After install, restart and launch Docker Desktop.
2. kubectl

- macOS:

```bash
brew install kubectl
```

- Linux (Debian/Ubuntu):

```bash
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
```

- Windows (PowerShell as Admin):

```powershell
choco install kubernetes-cli
```

3. kind (Kubernetes in Docker)
- macOS:

```bash
brew install kind
```

- Linux:

```bash
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```

- Windows (PowerShell as Admin):

```powershell
choco install kind
```

4. minikube
- macOS:

```bash
brew install minikube
```

- Linux:

```bash
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```

- Windows (PowerShell as Admin):

```powershell
choco install minikube
```

5. Helm
- macOS:

```bash
brew install helm
```

- Linux:

```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

- Windows (PowerShell as Admin):

```powershell
choco install kubernetes-helm
```

Verify Installations

```bash
kubectl version --client
kind version
minikube version
helm version
docker --version
```

Concepts & Skills
1) Kubernetes Mental Model
Definition: Kubernetes continuously reconciles actual state (running Pods) toward desired state (your specs) via controllers. This one idea explains everything: you ask for N replicas, Deployments manage ReplicaSets, which manage Pods; Services route traffic to healthy Pod endpoints.
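You can watch the reconciliation loop work with nothing but kubectl. A minimal sketch, assuming a Deployment labeled `app=hello` (like the one in the Before → After example below) is already running:

```bash
# Delete any Pod owned by the Deployment and watch a replacement
# appear: the ReplicaSet reconciles back to the declared replica count.
kubectl get pods -l app=hello
kubectl delete pod <one-of-the-pod-names>
kubectl get pods -l app=hello -w   # Ctrl-C to stop watching
```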
Best practices:
- Treat Pods as ephemeral; deploy via Deployments, not bare Pods.
- Scale & roll out via Deployments; don’t hand-edit Pods.
- Expose traffic via Services; keep labels/selectors consistent.
- Let controllers do the work; think “declare, don’t script.”
Commands you’ll use:
```bash
kubectl get deploy,rs,pods,svc -A
kubectl describe deploy <name>
kubectl rollout status deploy/<name>
kubectl scale deploy/<name> --replicas=3
```

Before → After
```yaml
# Before: single Pod (fragile)
apiVersion: v1
kind: Pod
metadata: { name: hello }
spec: { containers: [{ name: app, image: nginx:1.25 }] }
```

```yaml
# After: Deployment + Service (managed, scalable)
apiVersion: apps/v1
kind: Deployment
metadata: { name: hello }
spec:
  replicas: 2
  selector: { matchLabels: { app: hello } }
  template:
    metadata: { labels: { app: hello } }
    spec:
      containers: [{ name: app, image: nginx:1.25 }]
---
apiVersion: v1
kind: Service
metadata: { name: hello }
spec:
  type: ClusterIP
  selector: { app: hello }
  ports: [{ port: 80, targetPort: 80 }]
```

Decision cues (when to use):
- Deployment for stateless apps, rolling updates.
- StatefulSet for ordered/identity-bound Pods (DBs).
- DaemonSet for per-node agents.
- Job/CronJob for finite/recurring work.
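For instance, a finite or scheduled task belongs in a Job/CronJob rather than a Deployment. A quick, hedged way to scaffold one without hand-writing YAML (`hello-cron` is a hypothetical name):

```bash
# Generate a CronJob manifest client-side; nothing touches the cluster.
kubectl create cronjob hello-cron \
  --image=busybox:1.36 \
  --schedule="*/5 * * * *" \
  --dry-run=client -o yaml > cronjob.yaml
```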
2) YAML Hygiene
Definition: Kubernetes resources are structured YAML: apiVersion, kind, metadata, spec. 90% of “Why won’t it work?” is indentation, wrong apiVersion, or misplaced fields.
Best practices:
- Keep a stable skeleton: `apiVersion`/`kind`/`metadata`/`spec`.
- Use 2 spaces; never tabs.
- Verify with `kubectl explain` and `kubectl apply --dry-run=client -f`.
- Prefer labels (e.g., `app: hello`) for selectors; avoid ad-hoc names.
- Pin images (e.g., `nginx:1.25`) to avoid surprise upgrades.
Helpful commands:
```bash
kubectl explain deployment.spec --recursive | less
kubectl apply --dry-run=client -f hello.yaml
```

Before → After
```yaml
# Before: wrong apiVersion, mixed tabs
apiVersion: apps/v2
kind: Deployment
metadata:
  name: hello  # <-- tab!
spec: {}
```

```yaml
# After: correct and clean
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector: { matchLabels: { app: hello } }
  template:
    metadata: { labels: { app: hello } }
    spec:
      containers: [{ name: app, image: nginx:1.25 }]
```

Decision cues:
- Use `kubectl explain` if you’re guessing a field.
- Use `--dry-run` for fast validation before the real apply.
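A small sketch that applies both cues at once, assuming your manifests live in a `k8s/` directory as in the mini-lab later:

```bash
# Client-side validation for every manifest before a real apply.
for f in k8s/*.yaml; do
  kubectl apply --dry-run=client -f "$f" >/dev/null \
    && echo "OK: $f" \
    || echo "FAIL: $f"
done
```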
3) kubectl Basics
Definition: kubectl is the CLI to talk to the API server using your current context/namespace. Wrong context/namespace is the #1 cause of “it’s not found.”
Best practices:
- Set a default namespace per context.
- Use wide output and labels in `get`.
- Rely on `describe` and events to debug.
- Use `-o yaml` to see server-filled fields.
- Use kubeconfig contexts; don’t point at prod by accident.
Commands:
```bash
kubectl config get-contexts
kubectl config set-context --current --namespace=dev
kubectl get pods -o wide
kubectl describe pod <pod>
kubectl get events --sort-by=.lastTimestamp
```

Before → After
```bash
# Before: implicit default namespace (surprise!)
kubectl get pods

# After: explicit context & namespace
kubectl config set-context --current --namespace=dev
kubectl get pods -n dev
```

Decision cues:
- If a resource “disappears,” check namespace.
- If a command “hangs,” check context/cluster.
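One way to make the “don’t point at prod” rule hard to violate is a small wrapper that only runs against local kind contexts. This is a hypothetical helper, not a kubectl feature; it relies on kind naming its contexts `kind-<cluster>`:

```bash
# Hypothetical guard: run kubectl only if the current context is a kind cluster.
safe_kubectl() {
  local ctx
  ctx=$(kubectl config current-context)
  case "$ctx" in
    kind-*) kubectl "$@" ;;
    *) echo "Refusing: context '$ctx' is not a kind cluster" >&2; return 1 ;;
  esac
}

safe_kubectl get pods   # runs only on kind-* contexts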
4) Local Cluster: kind or minikube
Definition: kind runs Kubernetes in Docker containers; minikube runs a local Kubernetes VM/container. Fast, reproducible clusters for development and demos.
Best practices:
- Use kind for simple, Docker-backed clusters.
- Use minikube for addons/ingress and driver flexibility.
- Name clusters per project (`kind create cluster --name demo`).
- Export kubeconfig only for the current shell (avoid prod collisions).
Commands:
```bash
# kind
kind create cluster --name k90
kubectl cluster-info
kubectl get nodes

# minikube
minikube start
minikube status
kubectl get nodes
```

Before → After
```bash
# Before: ad-hoc envs, “works on my machine”
docker run -p 8080:80 nginx

# After: real cluster parity locally
kind create cluster --name k90
kubectl apply -f k8s/hello.yaml
```

Decision cues:
- kind if you already live in Docker land & want speed.
- minikube if you need addons and drivers (HyperKit, Docker, etc.).
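kind can also model multi-node topologies from a small config file. A sketch, assuming the `kind.x-k8s.io/v1alpha4` config API (current as of kind v0.22) and a hypothetical cluster name `demo-multi`:

```bash
# Create a 1 control-plane + 2 worker cluster from an inline config.
cat <<'EOF' | kind create cluster --name demo-multi --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF

kubectl get nodes   # expect three nodes: one control-plane, two workers
```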
5) Helm Basics
Definition: Helm is Kubernetes’ package manager: it templatizes YAML into versioned charts, which are installed as releases. Stop copy-pasting YAML; manage environments, values, upgrades, and rollbacks cleanly.
Best practices:
- Keep charts minimal; prefer a small set of templates.
- Parameterize only what changes across envs.
- Validate with `helm lint` and render with `helm template`.
- Track releases with `helm list` and roll back confidently.
Commands:
```bash
helm create hello
helm lint hello
helm template hello
helm install hello ./hello -n dev --create-namespace
helm upgrade hello ./hello -f values-dev.yaml
helm rollback hello 1
```

Before → After
```bash
# Before: multiple hand-maintained YAML files per env
kubectl apply -f dev/hello.yaml
kubectl apply -f prod/hello.yaml

# After: one chart, many values
helm install hello ./hello -f values-dev.yaml
helm upgrade hello ./hello -f values-prod.yaml
```

Decision cues:
- Use raw YAML to learn and for tiny one-offs.
- Use Helm once you need env variants, upgrades, teams, reuse.
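Once you’re on Helm, a single idempotent command covers both the first install and later upgrades. A sketch, assuming the `hello` chart and a `values-dev.yaml` from the examples above:

```bash
# Install if absent, upgrade if present; --atomic rolls the release
# back automatically if the upgrade fails to become ready in time.
helm upgrade --install hello ./hello \
  -n dev --create-namespace \
  -f values-dev.yaml \
  --atomic --timeout 2m
```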
Diagrams

(Diagrams not included here: Architecture (request → Service → Pods) and Control Plane Path (sequence).)
Hands-on Mini-Lab (20–30 min)
Goal: Create a cluster, deploy “hello” with raw YAML, then with Helm; compare manifests.
1) Create a local cluster
```bash
kind create cluster --name k90
kubectl cluster-info
kubectl get nodes
kubectl config set-context --current --namespace=dev
```

(macOS: if bash errors, run in zsh or install a newer bash with Homebrew.)
2) Raw YAML: Deployment + Service
Create k8s/hello.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  labels: { app: hello }
spec:
  replicas: 2
  selector: { matchLabels: { app: hello } }
  template:
    metadata: { labels: { app: hello } }
    spec:
      containers:
        - name: app
          image: nginx:1.25
          ports: [{ containerPort: 80 }]
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels: { app: hello }
spec:
  type: ClusterIP
  selector: { app: hello }
  ports:
    - name: http
      port: 80
      targetPort: 80
```

Apply and verify:
```bash
kubectl apply -f k8s/hello.yaml
kubectl rollout status deploy/hello
kubectl get svc hello -o wide
kubectl get pods -l app=hello -o wide
```

Port-forward to test:
```bash
kubectl port-forward svc/hello 8080:80
# open http://localhost:8080
```
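In a second terminal, a one-liner confirms traffic really flows through the forward (nginx’s default page contains “Welcome to nginx!”):

```bash
curl -s http://localhost:8080 | grep -i "welcome to nginx"
```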
3) Helm-ify it

Create a chart and trim it down:

```bash
helm create hello-chart
# Keep only templates/deployment.yaml and templates/service.yaml;
# delete extras like hpa, serviceaccount, tests.
```

hello-chart/values.yaml (minimal):
```yaml
replicaCount: 2

image:
  repository: nginx
  tag: "1.25"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

labels:
  app: hello
```

hello-chart/templates/deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hello-chart.fullname" . }}
  labels:
    {{- include "hello-chart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.labels.app }}
  template:
    metadata:
      labels:
        app: {{ .Values.labels.app }}
        {{- include "hello-chart.labels" . | nindent 8 }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 80
```

hello-chart/templates/service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "hello-chart.fullname" . }}
  labels:
    {{- include "hello-chart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  selector:
    app: {{ .Values.labels.app }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: 80
```

Render & compare with your raw YAML:
```bash
helm template hello ./hello-chart > rendered.yaml
diff -u k8s/hello.yaml rendered.yaml || true
```

Install and test:
```bash
helm install hello ./hello-chart -n dev --create-namespace
kubectl get all -l app=hello
kubectl port-forward svc/hello 8080:80
```

Upgrade and rollback:
```bash
# Bump replicas via values
helm upgrade hello ./hello-chart --set replicaCount=3
helm history hello
helm rollback hello 1   # back to first revision
```
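Before trusting an upgrade, it’s worth confirming the desired state actually changed. A quick check, selecting by the `app=hello` label since the chart’s generated resource name can differ from the release name:

```bash
# Desired replica count straight from the Deployment spec…
kubectl get deploy -l app=hello -o jsonpath='{.items[0].spec.replicas}'; echo
# …and the user-supplied values Helm recorded for the release.
helm get values hello
```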
Cheatsheet Table (Top 12)

| Command | What it does |
|---|---|
| `kubectl config get-contexts` | List kubeconfig contexts; know where you’re pointed. |
| `kubectl config set-context --current --namespace=dev` | Set the default namespace for the current context. |
| `kubectl get pods -o wide` | Show Pods with node/IP; quick health snapshot. |
| `kubectl describe pod/<name>` | Deep dive into events, containers, and failure reasons. |
| `kubectl get events --sort-by=.lastTimestamp` | Recent cluster events for fast debugging. |
| `kubectl apply -f file.yaml` | Declaratively create/update resources. |
| `kubectl rollout status deploy/<name>` | Watch a rollout until success/failure. |
| `kubectl logs deploy/<name> -f` | Stream logs from a Pod in the Deployment. |
| `kubectl port-forward svc/<name> 8080:80` | Access ClusterIP Services from localhost. |
| `helm template <rel> <chart>` | Render manifests locally (no cluster changes). |
| `helm install <rel> <chart> -f values.yaml` | Install a chart as a named release. |
| `helm upgrade --install <rel> <chart>` | Idempotent deploy; create or upgrade in one. |
Pitfalls & Recovery
- ImagePullBackOff/ErrImagePull: repository or tag is wrong, or registry creds are missing.
  Fix: `kubectl describe pod`, verify `image:`; try `docker pull` locally; add an imagePullSecret if the registry is private.
- Pods stuck Pending: no schedulable nodes, resource requests too high, or PVC issues.
  Fix: `kubectl describe pod`; check `kubectl get nodes`; reduce `resources.requests`; with minikube, `minikube addons enable storage-provisioner`.
- Service not routing: `selector` doesn’t match Pod labels, or `targetPort` mismatch.
  Fix: compare `spec.selector` to `pod.metadata.labels`; align `targetPort` with `containerPort`.
- Wrong context/namespace: resources “missing.”
  Fix: `kubectl config current-context`; `kubectl get ns`; set the proper namespace.
- Helm upgrade fails (immutable fields): some fields (e.g., `spec.clusterIP`) can’t change in place.
  Fix: `helm diff` (plugin) to preview changes; for Services, preserve `clusterIP`; otherwise `helm uninstall` and re-install.
- RBAC forbidden: in restricted clusters, applies fail.
  Fix: ask for the right Role/RoleBinding; test with `kubectl auth can-i`.
- Tabs/indentation in YAML: parsing errors or ignored fields.
  Fix: convert tabs to spaces; validate with `kubectl apply --dry-run=client -f`.
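When you hit any of these, the same three commands almost always localize the problem. A small triage sketch (`<pod-name>` left for you to fill in):

```bash
POD=<pod-name>   # fill in from `kubectl get pods`

kubectl describe pod "$POD"                                 # events, image, probes
kubectl get events --sort-by=.lastTimestamp | tail -n 20    # recent cluster events
kubectl logs "$POD" --previous 2>/dev/null || kubectl logs "$POD"   # crashed container first
```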
Quick Bash (≥4) Scriptlets — Point‑wise Explanation
Below is a line‑by‑line breakdown of the helper script that creates or deletes a local kind cluster and sets a default namespace.
```bash
#!/usr/bin/env bash
set -euo pipefail

CLUSTER=${1:-k90}

case "${2:-up}" in
  up)
    kind create cluster --name "$CLUSTER"
    kubectl config set-context --current --namespace=dev
    ;;
  down)
    kind delete cluster --name "$CLUSTER"
    ;;
esac
```

What each line does
- `#!/usr/bin/env bash`: shebang. Asks the OS to execute this file with the first `bash` found in your `PATH` (portable across systems).
- `set -euo pipefail`: enables strict mode:
  - `-e`: exit immediately if any command exits with non-zero status.
  - `-u`: error when using an unset variable (catches typos/assumptions).
  - `-o pipefail`: a pipeline fails if any command in it fails (not just the last).
- `CLUSTER=${1:-k90}`: positional argument `$1` is the cluster name; if omitted, defaults to `k90`. Examples: `./cluster.sh` ⇒ name `k90`; `./cluster.sh demo` ⇒ name `demo`.
- `case "${2:-up}" in`: dispatches on the action provided in `$2`; defaults to `up` if not given. Usage examples:
  - `./cluster.sh` → `up` (default)
  - `./cluster.sh demo` → `up` on cluster `demo`
  - `./cluster.sh demo down` → `down` on cluster `demo`
- `up)` block:
  - `kind create cluster --name "$CLUSTER"` creates a Docker-backed Kubernetes cluster named `$CLUSTER`.
  - `kubectl config set-context --current --namespace=dev` sets the default namespace for the current kubeconfig context to `dev`, so you don’t need `-n dev` for every command.
- `down)` block:
  - `kind delete cluster --name "$CLUSTER"` removes the cluster and its Docker containers cleanly.
- `;;` and `esac`: `;;` ends each case arm; `esac` closes the `case` statement.
Why this script is useful
- Idempotent lifecycle: One command to bring the cluster up or tear it down.
- Safer defaults: Strict mode prevents partial/hidden failures.
- Less typing: Sets your working namespace so `kubectl get pods` “just works.”
- Portable: the `env` shebang finds the right `bash`; easily adapted for zsh.
Common tweaks you might add (a combined sketch follows this list)

- Wait for node readiness: `kubectl wait --for=condition=Ready nodes --all --timeout=120s`
- Install an ingress addon (minikube): `minikube addons enable ingress`
- Switch context safely (guard rails): `kubectl config current-context | grep -q kind- || { echo "Refusing to run outside a kind context" >&2; exit 1; }`
- Parameterize the namespace: `NS=${NS:-dev}` then `kubectl config set-context --current --namespace="$NS"`
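A sketch of the script with those tweaks folded in (assumptions: kind names its contexts `kind-<cluster>`, and `NS` defaults to `dev`):

```bash
#!/usr/bin/env bash
set -euo pipefail

CLUSTER=${1:-k90}
NS=${NS:-dev}   # override per run: NS=staging ./cluster.sh

case "${2:-up}" in
  up)
    kind create cluster --name "$CLUSTER"
    # Guard rail: bail out unless we really landed on a kind context.
    kubectl config current-context | grep -q '^kind-' \
      || { echo "Refusing to run outside a kind context" >&2; exit 1; }
    kubectl wait --for=condition=Ready nodes --all --timeout=120s
    kubectl config set-context --current --namespace="$NS"
    ;;
  down)
    kind delete cluster --name "$CLUSTER"
    ;;
  *)
    echo "Usage: $0 [cluster-name] [up|down]" >&2
    exit 1
    ;;
esac
```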
macOS / Linux / Windows notes

- macOS: If `/bin/bash` is 3.2, run with zsh (`#!/bin/zsh`) or `brew install bash` and use `/usr/local/bin/bash` (or `/opt/homebrew/bin/bash` on Apple Silicon).
- Linux: Ensure your user can access Docker without `sudo` (`usermod -aG docker $USER`, then re-login).
- Windows: Run the script from WSL2 (Ubuntu) for the best experience with kind/minikube; ensure Docker Desktop has the WSL2 backend enabled.
Quick usage recap
```bash
# Bring up default cluster (k90) and set namespace dev
./cluster.sh

# Bring up a named cluster
./cluster.sh demo up

# Tear it down
./cluster.sh demo down
```

You’re ready. Save this page, keep the YAML/Helm snippets handy, and start iterating.
Wrap-up & Next Steps
You now have:
- A cluster you can spin up/down quickly.
- A clean Deployment + Service in raw YAML.
- A minimal Helm chart with values and releases.
- A mental model to reason about controllers and desired state.
Post 1 (coming up):
- Ingress vs. NodePort vs. LoadBalancer with local ingress addons.
- Rolling updates & health probes (readiness/liveness/startup).
- Config & Secrets (env vars, mounted files, externalized values).
- Resource requests/limits & HPA basics.
- Helm strategies: values layering, env directories, `helmfile` preview.

