
I want to set up a Kubernetes cluster, for testing and later for production use, but I am stuck on the concept of VMs running in Kubernetes.

First I want to mention that I don't have shared storage. I use Ceph as a hyperconverged (HCI) solution (for testing I used Ceph on Proxmox), and it worked like a charm. Usually I would stick to that solution, but Kubernetes has too many advantages that I want to benefit from.

The configuration I have in mind: each worker is a server with a boot drive for Kubernetes plus a couple of additional drives, and I create a Ceph cluster across the worker nodes. Does that make sense?

For me it does. Why: unlike container images, which can simply be pulled again, a VM is static (an installed OS and application running a service). By the way, the VMs are Windows. From my understanding, without shared storage or HCI Ceph the VM cannot move to another worker, so in case of a failure the VM is down. Is that correct so far?

The goal: six VMs that only need to talk to each other, and one that also connects to the external world.

Maybe somebody has advice for such a scenario. I can only find articles that assume external storage.

  • You can already run 6 VMs with PVE and Ceph: just enable HA and the VMs will be restarted on another node if the first one fails. Note that for the purposes of PVE HA, Ceph counts as shared storage. They even had demonstration videos of that setup (three computers, Ceph, HA) on their YouTube channel. What's the question? Do you want something else? Commented Oct 10 at 13:26
  • But in general, know that Kubernetes is not the only HA-capable cluster manager out there. It's very popular, but it's very specialized, tailored to one kind of architecture (microservices) and one type of managed service (containers). There are much more generic cluster managers: open-source Pacemaker, commercial solutions such as Veritas Cluster Server, and more. Commented Oct 10 at 13:36

1 Answer


Are the workload instances stateless and replaceable? For example, if they just respond to requests over IP, or save their outputs to some shared object storage: the kind of instances you can scale out without much effort, and that do not need something like a Kubernetes StatefulSet.

If stateless, then keeping the same nodes is not required. If a VM fails, remove it from service (for example, from a load balancer), create a new seventh instance from a base image, add the new one into service, and continue on. Shared storage is not required; everything could live entirely on local disk arrays.
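In Kubernetes terms, this replace-on-failure pattern is what a Deployment gives you. A minimal sketch, with placeholder names and image (nothing here comes from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-worker            # hypothetical name
spec:
  replicas: 6                       # six interchangeable instances
  selector:
    matchLabels:
      app: stateless-worker
  template:
    metadata:
      labels:
        app: stateless-worker
    spec:
      containers:
        - name: worker
          image: registry.example.com/worker:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

If a node dies, the controller simply schedules replacement pods on surviving nodes; no state needs to follow them.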

However, you likely have instances where state must be preserved: database instances and the like. What is your recovery point objective (RPO), i.e. how much data can you afford to lose when bringing a VM back up after an unplanned event?

If the last few minutes can be lost, Proxmox HA can bring up a VM on a different host using ZFS asynchronous storage replication. (Forum threads indicate this has been a feature for a few years now.) Storage on each host is independent, linked over IP only when ZFS sends deltas.

If the RPO is tighter, recovering all data written to disk, shared storage becomes more compelling: rather than a replicated copy of the disks, every host sees the same disks. You do not have an FC storage array with LUNs presented to all physical hosts at once, but distributed storage like Ceph provides the same disk blobs to all hosts. Either way, this is shared storage, and Proxmox HA could make sense.

Kubernetes operates at a higher layer: defining, resourcing, and scheduling applications to run. You might prefer deploying applications via k8s and its declarative API rather than finding some other automation tool. Originally the runnable atom was a container, but KubeVirt allows running VMs as Kubernetes objects.
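To make that concrete, here is a minimal KubeVirt VirtualMachine sketch, assuming KubeVirt is installed. The name and demo containerDisk image are illustrative; a Windows guest would normally boot from a PersistentVolumeClaim instead:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                     # hypothetical name
spec:
  running: true                     # start the VM when the object is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 4Gi
            cpu: "2"
      volumes:
        - name: rootdisk
          containerDisk:            # demo-only; not persistent across restarts
            image: quay.io/containerdisks/fedora:latest
```

The VM is now a Kubernetes object: kubectl get vm shows it, and the scheduler decides which worker runs it.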

These building blocks of software can be combined in several ways, according to your preference.

If you are using Proxmox anyway, their distribution integrates Ceph: storage services run on the same nodes as the VMs.

If Kubernetes plus VMs intrigues you, find something to run KubeVirt. Harvester is a package-deal HCI distribution featuring KubeVirt, although its storage layer is Longhorn, not Ceph.

Or you could integrate KubeVirt + Ceph yourself, for example with the ceph-csi driver, so that VM disks live on RBD volumes reachable from every node.
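A rough sketch of that combination, assuming a working Ceph cluster and the ceph-csi RBD driver already deployed (cluster ID, pool, secret names, and size are all placeholders you would substitute):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd                    # hypothetical name
provisioner: rbd.csi.ceph.com       # the ceph-csi RBD driver
parameters:
  clusterID: <ceph-cluster-id>      # placeholder
  pool: kubernetes                  # placeholder RBD pool
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win-vm-disk                 # hypothetical name
spec:
  accessModes:
    - ReadWriteMany                 # RWX block mode is what allows live migration
  volumeMode: Block
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 60Gi
```

A KubeVirt VirtualMachine can then reference win-vm-disk as a persistentVolumeClaim volume; since every node can attach the RBD image, the VM can be restarted on, or even live-migrated to, another worker.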

Or you could run Kubernetes inside Proxmox VMs. But you wanted to run the apps themselves in VMs, and I don't see how KubeVirt managing the VM lifecycle is compatible with Proxmox also doing that. So maybe not.
