
Fixing “Kubernetes configuration file is group-readable or world-readable” warnings

The Issue

When using kubectl or oc, you may see warnings that your Kubernetes configuration file is readable by its group or by everyone.

WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/user/cluster/admin-kubeconfig
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/user/cluster/admin-kubeconfig

The Cause

The kubeconfig file has permissions that allow access for group or others. The tools expect your kubeconfig to be readable and writable only by your user.

You can confirm this with a long listing. If you see read permission for group or others, the file is too open.

ls -l /home/user/cluster/admin-kubeconfig
-rw-r--r-- 1 user staff 12345 Sep 3 14:05 /home/user/cluster/admin-kubeconfig
# ^ group and others have read access
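
If you prefer a numeric view, GNU stat on Linux can print the octal mode directly (macOS/BSD stat uses different format flags, so this is a Linux-flavoured sketch). A mode of 644 matches the too-open listing above; 600 is the target.

stat -c '%a %U:%G %n' /home/user/cluster/admin-kubeconfig
# 644 user:staff /home/user/cluster/admin-kubeconfig   <- group and others can read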

The Fix

  1. Restrict the file permissions so only your user can read and write it.
    chmod 600 /home/user/cluster/admin-kubeconfig
  2. Optionally restrict the directory that holds the file.
    chmod 700 /home/user/cluster
  3. Verify the new permissions. The output should show owner read and write only.
    ls -l /home/user/cluster/admin-kubeconfig
    -rw------- 1 user staff 12345 Sep 3 14:05 /home/user/cluster/admin-kubeconfig
  4. Consider moving the kubeconfig into your home configuration folder for easier use, then point your tools at it.
    mkdir -p ~/.kube
    mv /home/user/cluster/admin-kubeconfig ~/.kube/admin-kubeconfig
    export KUBECONFIG=~/.kube/admin-kubeconfig

    If you work with several kubeconfigs, you can join them in an environment variable; a quick way to check the merged result is shown just after this list.

    export KUBECONFIG=~/.kube/admin-kubeconfig:~/.kube/other.kubeconfig
  5. Keep your kubeconfig private. Do not share it, and do not commit it to a source control system.
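
To confirm that a joined KUBECONFIG (as in step 4) resolves the way you expect, kubectl can list the contexts it sees. The context and cluster names below are just placeholders for illustration.

export KUBECONFIG=~/.kube/admin-kubeconfig:~/.kube/other.kubeconfig
kubectl config get-contexts
# CURRENT   NAME            CLUSTER    AUTHINFO   NAMESPACE
# *         admin-context   cluster1   admin
#           other-context   cluster2   dev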

Regards


Follow me on Bluesky

Dean Lewis


Zoom Error 5456000 – An unknown error occurred – Unable to Connect

The Issue

I’ve been plagued by this issue on a single macOS device for a while and haven’t been sure why. I tried uninstalling and reinstalling to no avail, and Google wasn’t much help.

Ultimately the Zoom app itself couldn’t connect to meetings.

An unknown error occurred Error code: 5456000


The Cause

I’ve no idea what the exact cause is; however, I realised the app had zero network connectivity when it couldn’t even check for updates, even after an uninstall and reinstall.

The Fix

Don’t uninstall by moving the app to the trash bin; instead, uninstall it this way:

Applications > Zoom > Right Click - Show Package Contents > Click into the Contents Folder > Click into Frameworks > Run the ZoomUninstaller file

Once completed, I’d say it’s a good idea to do a full restart of your MacBook, and then reinstall from the latest package downloaded from zoom.us.
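
If you prefer the Terminal, something along these lines should match the Finder steps above. The paths assume the default install location and the usual zoom.us.app bundle name, so adjust if yours differs.

ls "/Applications/zoom.us.app/Contents/Frameworks/"                 # confirm ZoomUninstaller is there
open "/Applications/zoom.us.app/Contents/Frameworks/ZoomUninstaller"  # same as double-clicking it in Finder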


 

Regards


Follow me on Bluesky

Dean Lewis


Safely Clean Up Orphaned First Class Disks (FCDs) in VMware vSphere with PowerCLI

vSphere Orphaned First Class Disk (FCD) Cleanup Script

Orphaned First Class Disks (FCDs) in VMware vSphere environments are a surprisingly common and frustrating issue. These are virtual disks that exist on datastores but are no longer associated with any virtual machine or Kubernetes persistent volume (via CNS). They typically occur due to:

  • Unexpected VM deletions without proper disk clean-up
  • Kubernetes CSI driver misfires, especially during crash loops or failed PVC deletes
  • vCenter restarts or failovers during CNS volume provisioning or deletion
  • Manual admin operations gone slightly sideways!

Left unchecked, orphaned FCDs can consume significant storage space, cause inventory clutter, and confuse both admins and automation pipelines that expect everything to be nice and tidy.

🛠️ What does this script do?

Inspired by William Lam’s original blog post on FCD cleanup, this script takes the concept further with modern PowerCLI best practices.

You can download and use the latest version of the script from my GitHub repo:
👉 https://github.com/saintdle/PowerCLI/blob/saintdle-patch-1/Cleanup%20standalone%20FCD

Here’s what it does:

  1. Checks if you’re already connected to vCenter; if not, prompts you to connect
  2. Retrieves all existing First Class Disks (FCDs) using Get-VDisk
  3. Retrieves all Kubernetes-managed volumes using Get-CnsVolume
  4. Excludes any FCDs still managed by Kubernetes (CNS)
  5. For each remaining “orphaned” FCD, checks if it is mounted to any VM (even if Kubernetes doesn’t know about it)
  6. Generates a report (CSV + logs) of any true orphaned FCDs (not in CNS + not attached to any VM)
  7. If dry-run mode is OFF, safely removes the orphaned FCDs from the datastore

The script is intentionally designed for safety first, with dry-run mode ON by default. You must explicitly allow deletions with -DryRun:$false and optionally -AutoDelete.
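
As a rough usage sketch (not a tested recipe): assuming you save the script locally as Cleanup-StandaloneFCD.ps1 (a filename I’ve made up here) and have PowerShell 7 plus the VMware.PowerCLI module installed, running it from a shell might look like this. The -DryRun and -AutoDelete parameters are the ones described above.

# Dry run (the default): report orphaned FCDs to CSV/logs, delete nothing.
pwsh -File ./Cleanup-StandaloneFCD.ps1

# Only after reviewing the dry-run report: allow deletions.
# -DryRun:$false switches dry-run off; -AutoDelete is the optional switch described above.
pwsh -File ./Cleanup-StandaloneFCD.ps1 -DryRun:$false -AutoDelete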

❗ Known limitations and gotchas

Despite our best efforts, there is one notorious problem child: the dreaded locked or “current state” error.

You may still see errors like:

The operation is not allowed in the current state.

This happens when vSphere believes something (an ESXi host, a failed task, or the VASA provider) has an active reference to the FCD. These “ghost locks” can only be diagnosed and resolved by:

  • Using ESXi shell commands like vmkfstools -D to trace lock owners (see the sketch just after this list)
  • Rebooting an ESXi host holding the lock
  • Engaging VMware GSS to clear internal stale references (sometimes the only safe option)
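
For the first of those options, the command itself is straightforward; here’s a hedged example (the datastore and disk path are made up, substitute the VMDK backing your stuck FCD):

# Run from an SSH/ESXi Shell session on a host that can see the datastore.
vmkfstools -D /vmfs/volumes/my-datastore/fcd/stuck-disk.vmdk
# Check the lock mode and owner fields in the output; the owner UUID typically
# embeds the MAC address of the ESXi host holding the lock, which tells you where to dig next.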

This script does not attempt to forcibly unlock or clean these disks for obvious reasons. You really don’t want a script going full cowboy on locked production disks. 😅

So while the script works great for true orphaned disks, ghost FCDs are a special case and remain an exercise for the reader (or your VMware TAM and GSS support team!).

⚠️ Before you copy/paste this blindly…

Let me be brutally honest: this script is just some random code stitched together by me, a PowerCLI enthusiast with far too much time on my hands, and enhanced by ChatGPT. It’s never been properly tested in a production environment.

 

Regards


Follow me on Bluesky

Dean Lewis


Learn KubeVirt: Deep Dive for VMware vSphere Admins

As a vSphere administrator, you’ve built your career on understanding infrastructure at a granular level: datastores, DRS clusters, vSwitches, and HA configurations. You’re used to managing VMs at scale. Now you’re hearing about KubeVirt, and while it promises Kubernetes-native VM orchestration, it comes with a caveat: Kubernetes fluency is required. This post is designed to bridge that gap, not only explaining what KubeVirt is, but mapping its architecture, operations, and concepts directly to vSphere terminology and experience. By the end, you’ll have a mental model of KubeVirt that relates to your existing knowledge.

What is KubeVirt?

KubeVirt is a Kubernetes extension that allows you to run traditional virtual machines inside a Kubernetes cluster using the same orchestration primitives you use for containers. Under the hood, it leverages KVM (Kernel-based Virtual Machine) and QEMU to run the VMs (more on that further down).

Kubernetes doesn’t replace the hypervisor; it orchestrates it. Think of Kubernetes as the vCenter equivalent here: it manages the control plane, networking, scheduling, and storage interfaces for the VMs, with KubeVirt as a plugin that adds VM resource types to this environment.

Tip: KubeVirt is under active development; always check the latest docs for feature support.
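
To make the “VM as a Kubernetes object” idea concrete before the mapping table below, here is a minimal, illustrative VirtualMachine manifest applied with kubectl. It assumes KubeVirt is already installed in the cluster, uses the public CirrOS demo container disk, and the name testvm is just a placeholder.

kubectl apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false            # create the VM object powered off, like provisioning in vCenter
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
EOF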

Core Building Blocks of KubeVirt, Mapped to vSphere

KubeVirt Concept | vSphere Equivalent | Description
VirtualMachine (CRD) | VM Object in vCenter | The declarative spec for a VM in YAML. It defines the template, lifecycle behaviour, and metadata.
VirtualMachineInstance (VMI) | Running VM Instance | The live instance of a VM, created and managed by Kubernetes. Comparable to a powered-on VM object.
virt-launcher | ESXi Host Process | A pod wrapper for the VM process. Runs QEMU in a container on the node.
PersistentVolumeClaim (PVC) | VMFS Datastore + VMDK | Used to back VM disks. For live migration, either ReadWriteMany PVCs or RAW block-mode volumes are required, depending on the storage backend.
Multus + CNI | vSwitch, Port Groups, NSX | Provides networking to VMs. Multus enables multiple network interfaces. CNIs map to port groups.
Kubernetes Scheduler | DRS | Schedules pods (including VMIs) across nodes. Lacks fine-tuned VM-aware resource balancing unless extended.
Live Migration API | vMotion | Live migration of VMIs between nodes with zero downtime. Requires shared storage and certain flags.
Namespaces | vApp / Folder + Permissions | Isolation boundaries for VMs, including RBAC policies.
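
For day-to-day operations, the virtctl CLI covers most of what you’d normally reach for in the vSphere client. A few illustrative commands (testvm is a placeholder VM name):

kubectl get vms,vmis              # inventory view: defined VMs and their running instances
virtctl start testvm              # power on (creates a VMI)
virtctl console testvm            # serial console, roughly the VMRC equivalent
virtctl migrate testvm            # trigger a live migration, the vMotion analogue
virtctl stop testvm               # power off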

KVM + QEMU: The Hypervisor Stack

Continue reading Learn KubeVirt: Deep Dive for VMware vSphere Admins


Kubernetes Finalizers: Deep Dive into PVC Deletion

This article builds upon an old post I wrote many years ago: Kubernetes PVC stuck in Terminating state. That post covered the symptoms and quick fixes.

This one is for platform engineers and Kubernetes operators who want to understand why resources like PVCs get stuck in Terminating, how Kubernetes handles deletion internally, and what it really means when a finalizer hangs around.

What Are Finalizers and Why Do They Matter?

In Kubernetes, deleting a resource is a two-phase operation. When a user runs
kubectl delete, the object is not immediately removed from etcd. Instead, Kubernetes sets a deletionTimestamp and, if finalizers are present, waits for them to be cleared before actually removing the resource from the API server.

Finalizers are strings listed in the metadata.finalizers array. Each one signals that a
controller must perform cleanup logic before the object can be deleted. This ensures consistency and is critical when external resources (cloud volumes, DNS records, firewall rules) are involved.

metadata:
  finalizers:
    - example.com/cleanup-hook

Until this list is empty, Kubernetes will not fully delete the object. This behavior is central to the garbage collection process and the reliability of resource teardown.
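
In practice this is easy to see on a stuck object. The commands below use a placeholder PVC name, my-claim; the finalizer shown in the example output is the standard one added by the PVC protection controller.

kubectl get pvc my-claim -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'
# example output:
# 2024-01-01T10:00:00Z
# ["kubernetes.io/pvc-protection"]

# Last resort only, after confirming nothing still needs cleanup - removing the
# finalizer tells the API server the cleanup is done, whether or not it actually is:
kubectl patch pvc my-claim --type=merge -p '{"metadata":{"finalizers":null}}'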

Deletion Flow Internals

Here’s what actually happens under the hood:

Continue reading Kubernetes Finalizers: Deep Dive into PVC Deletion