feat(k8s): use sfs
bene2k1 committed Oct 22, 2025
commit 4c7c9b20f56a3a1f7abddbc78b12436fd1b0b5b5
187 changes: 101 additions & 86 deletions pages/kubernetes/how-to/use-sfs-with-kubernetes.mdx
---
title: How to use SFS with Kubernetes Kapsule
description: This page explains how to use the Scaleway File Storage Container Storage Interface (CSI) driver to enable Kubernetes users to manage Scaleway File Storage volumes within their clusters.
tags: kubernetes kubernetes-kapsule kapsule sfs
dates:
validation: 2025-07-18
posted: 2025-07-18
categories:
- containers
- kubernetes
---
import Requirements from '@macros/iam/requirements.mdx'

The Scaleway File Storage Container Storage Interface (CSI) driver enables Kubernetes users to manage Scaleway File Storage volumes within their clusters.
The Scaleway File Storage CSI driver is designed to work with Kubernetes Kapsule and Kosmos clusters, providing a standardized interface to create, manage, and attach file storage volumes to your containerized workloads. For more details on Scaleway File Storage, refer to the [Scaleway File Storage documentation](https://www.scaleway.com/en/file-storage/).

## Supported features
The Scaleway File Storage CSI driver supports the following features:

## Before you start

<Requirements />
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- [Created](/kubernetes/how-to/create-cluster/) a Kubernetes Kapsule cluster
- Helm installed for deploying the CSI driver
- Access to the Scaleway File Storage API

## Installation

The Scaleway File Storage CSI driver can be installed using Helm. Follow these steps:

1. Add the Scaleway Helm repository:
```bash
helm repo add scaleway https://helm.scw.cloud/
helm repo update
```


2. Deploy the Scaleway File Storage CSI Driver. Use the Helm chart to install the driver, configuring it with your Scaleway credentials and default zone:
```bash
helm upgrade --install scaleway-filestorage-csi --namespace kube-system scaleway/scaleway-filestorage-csi \
--set controller.scaleway.env.SCW_DEFAULT_ZONE=fr-par-1 \
    --set controller.scaleway.env.SCW_DEFAULT_PROJECT_ID=<your-project-id> \
    --set controller.scaleway.env.SCW_ACCESS_KEY=<your-access-key> \
    --set controller.scaleway.env.SCW_SECRET_KEY=<your-secret-key>
```

Replace `<your-project-id>`, `<your-access-key>`, and `<your-secret-key>` with your Scaleway credentials.
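
Alternatively, you can keep these settings in a values file instead of repeating `--set` flags. This is a minimal sketch, assuming the chart accepts the same `controller.scaleway.env` keys as the flags above (verify the exact structure against the chart's default values):

```yaml
# values.yaml — hypothetical sketch; key names mirror the --set flags above.
controller:
  scaleway:
    env:
      SCW_DEFAULT_ZONE: fr-par-1
      SCW_DEFAULT_PROJECT_ID: <your-project-id>
      SCW_ACCESS_KEY: <your-access-key>
      SCW_SECRET_KEY: <your-secret-key>
```

You can then install with `helm upgrade --install scaleway-filestorage-csi --namespace kube-system scaleway/scaleway-filestorage-csi -f values.yaml`. Avoid committing the secret key to version control.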

3. Check that the CSI driver pods are running:

```bash
kubectl get pods -n kube-system -l app=scaleway-filestorage-csi
```

## Using the Scaleway File Storage CSI Driver

The CSI driver supports dynamic provisioning of Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).

### Creating a Persistent Volume Claim (PVC)

This example demonstrates how to create a PVC to dynamically provision a Scaleway File Storage volume and use it in a pod.

1. Create a file named `pvc.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: scw-fs
```

Apply it:

```bash
kubectl apply -f pvc.yaml
```

2. Create a file named `pod.yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-busybox
      image: busybox
      volumeMounts:
        - name: my-volume
          mountPath: "/data"
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc
```

Apply the pod configuration:

```bash
kubectl apply -f pod.yaml
```

3. Verify the mount:

```bash
kubectl get pods
kubectl exec -it my-app -- df -h /data
```

The output should show the mounted Scaleway File Storage volume at `/data`.
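
Because the volume is provisioned with the `ReadWriteMany` access mode, several pods can mount the same PVC simultaneously. Here is a minimal sketch of a second pod (hypothetical name `my-app-2`) sharing the same claim:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-2
spec:
  containers:
    - name: my-busybox
      image: busybox
      volumeMounts:
        - name: my-volume
          mountPath: "/data"
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-pvc
```

Files written under `/data` by one pod become visible to the other, which is the typical use case for shared file storage.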

### Importing an existing File Storage volume

You can import an existing Scaleway File Storage volume into Kubernetes using the Scaleway File Storage CSI driver. This is useful for integrating pre-existing storage volumes into your cluster.

1. Create a `pv-import.yaml` file:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: scw-fs
  csi:
    driver: filestorage.csi.scaleway.com
    volumeHandle: fr-par/11111111-1111-1111-111111111111
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.filestorage.csi.scaleway.com/region
              operator: In
              values:
                - fr-par
```

Set `volumeHandle` to the region and ID of your existing volume, in the format `<region>/<volume-id>`.

Apply:

```bash
kubectl apply -f pv-import.yaml
```

2. Create `pvc-import.yaml`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-imported-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: scw-fs
  volumeName: test-pv
```

Apply:

```bash
kubectl apply -f pvc-import.yaml
```

3. Create the pod (reuse the example from earlier with `claimName: my-imported-pvc`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-imported-app
spec:
  containers:
    - name: my-busybox
      image: busybox
      volumeMounts:
        - name: my-volume
          mountPath: "/data"
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: my-imported-pvc
```

Apply:

```bash
kubectl apply -f pod.yaml
```

4. Verify:

```bash
kubectl get pods
kubectl exec -it my-imported-app -- ls /data
```

The output should list the contents of the imported Scaleway File Storage volume.

### Using a custom storage class

You can customize the storage class to define specific parameters for Scaleway File Storage volumes, such as the file system type.

1. Create `storageclass.yaml`:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-default-storage-class
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: filestorage.csi.scaleway.com
reclaimPolicy: Delete
parameters:
  csi.storage.k8s.io/fstype: ext4
```

Apply:

```bash
kubectl apply -f storageclass.yaml
```

2. Update the PVC to use this storage class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: my-default-storage-class
```

Apply:

```bash
kubectl apply -f pvc.yaml
```

3. Check that the PVC uses the custom storage class:

```bash
kubectl get pvc
kubectl describe pvc my-pvc
```

### Specifying the region

To specify a region explicitly for volume creation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-ams-storage-class
provisioner: filestorage.csi.scaleway.com
reclaimPolicy: Delete
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.filestorage.csi.scaleway.com/region
        values:
          - nl-ams
```