
Conversation

@Jessica41
Collaborator

What type of Pull Request is this?

  • Update of existing guide(s)
  • Fix
  • Optimization

Description

Updating the guide to include the EFI partition

Mandatory information

  • This Pull Request shouldn't be merged before: no date set.

  • This Pull Request content should be replicated for the US OVHcloud documentation: YES

@Jessica41 Jessica41 marked this pull request as draft August 5, 2025 13:09
@Jessica41 Jessica41 added the "Do not merge yet" and "FIX" labels Aug 5, 2025
@Jessica41 Jessica41 requested a review from sbraz August 18, 2025 15:36
Member

@sbraz sbraz left a comment

Thanks a lot for your work. That's going to be really helpful for customers!
I left a bunch of comments. Sometimes I only commented on one of the two versions but they apply to both.


The default RAID level for OVHcloud server installations is RAID 1, which doubles the space taken up by your data, effectively halving the usable disk space.

**This guide explains how to manage and rebuild software RAID after a disk replacement on your server in BIOS mode**
Member

Suggested change
**This guide explains how to manage and rebuild software RAID after a disk replacement on your server in BIOS mode**
**This guide explains how to manage and rebuild software RAID after a disk replacement on your server in legacy boot (BIOS) mode**

Internally, we refer to the boot mode as UEFI or legacy, not BIOS. I think BIOS could be misleading as the UEFI setup menu is still sometimes called "BIOS" or "BIOS setup".
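
On the RAID 1 point in the introduction above, readers can confirm the RAID level actually in use by querying the md arrays directly. A minimal sketch (the array name `/dev/md2` follows the guide's later examples and may differ on your server):

```sh
# List all software RAID arrays and their current state
cat /proc/mdstat

# Show the RAID level of a specific array (requires root)
mdadm --detail /dev/md2 | grep -i "raid level"
```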


For **GPT** partitions, line 6 will display: `Disklabel type: gpt`.

For **MBR** partitions, line 6 will display: `Disklabel type: dos`.
Member

I would not document MBR at all. We've entirely abandoned it for new installations: only 3% of the installations done in 2024 used it, and starting from July 2025 it's 0%.
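
If MBR coverage is dropped as suggested, the GPT check can still be shown compactly. A possible snippet (the device name `/dev/nvme0n1` is only an example):

```sh
# Print just the partition table type reported by fdisk
fdisk -l /dev/nvme0n1 | grep "Disklabel type"
# Expected on current installations: Disklabel type: gpt
```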


For **MBR** partitions, line 6 will display: `Disklabel type: dos`.

Still going by the results of `fdisk -l`, we can see that `/dev/md2` has a capacity of 888.8 GB and `/dev/md4` has 973.5 GB. Running the `mount` command also reveals the layout of the disk.
Member

You should mention that it's only the case when booted to disk, not in rescue.
Also, mount's output is pretty verbose. I'd just keep lsblk's
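
To illustrate the point about `lsblk`: a single invocation already gives a compact view of disks, partitions, md arrays and mount points, without the noise of `mount`. The column list below is only a suggestion:

```sh
# Compact overview of the disk layout, including software RAID devices
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
```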

To check whether a server boots in UEFI mode or legacy (BIOS) mode, run the following command:

```sh
[user@server_ip ~]# [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS
```
Member

That should be moved at the beginning of the doc when you mention legacy/UEFI.

Alternatively, the `lsblk` command offers a different view of the partitions:

```sh
lsblk
```
Member

Why no prompt here? Some other commands have `[user@server_ip ~]#`.

```sh
swap : ignored
```

We enable the swap partition:
Member

Suggested change
We enable the swap partition:
We activate the swap partition:
```sh
swap : ignored
```

We enable the swap partition:
Member

Suggested change
We enable the swap partition:
We activate the swap partition:
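
Whichever wording is kept, a quick verification right after the swap partition is activated could be useful (an optional check, not part of the original guide):

```sh
# Confirm that the swap space is now active
swapon --show
free -h
```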
```sh
swapon /dev/nvme1n1p4
```

We exit the Chroot environment with `Exit` and unmount all the disks:
Member

Suggested change
We exit the Chroot environment with `Exit` and unmount all the disks:
We exit the chroot environment with `exit` and unmount all the disks:
```sh
swapon /dev/sdb4
```

We exit the Chroot environment with `exit` and unmount all the disks:
Member

Suggested change
We exit the Chroot environment with `exit` and unmount all the disks:
We exit the chroot environment with `exit` and unmount all the disks:
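
For completeness, the exit-and-unmount step could look roughly like this. This is only a sketch and assumes the array was mounted under `/mnt` in rescue mode, which the guide may name differently:

```sh
# Leave the chroot environment
exit

# Recursively unmount everything mounted below /mnt and release the swap
umount -R /mnt
swapoff -a
```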
```sh
mount /dev/nvme0n1p1 new
```

Next, we copy the files from the `old` folder to the `new` folder. Depending on your operating system, the output will be similar. Here we are using Debian:
Member

I see in the next subsection where you perform the changes from the OS itself and not the rescue, you run the sync script. Surely it'd work in rescue too, did you try?

Collaborator Author

@sbraz I did perform all these steps in rescue mode, I just forgot to add the prompt for rescue mode. Doing that in the next update. Thanks!
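
For reference, the copy itself can be done with plain `cp` once both ESPs are mounted. The sketch below follows the guide's `old`/`new` folder naming; the device names are assumptions based on the surrounding example:

```sh
# Mount the ESP of the healthy disk and the ESP of the replacement disk
mkdir -p old new
mount /dev/nvme1n1p1 old
mount /dev/nvme0n1p1 new

# Copy the EFI files, preserving attributes
cp -a old/. new/

umount old new
```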

@Jessica41 Jessica41 added the "Guide creation" label Sep 5, 2025
Contributor

@SlimJ4 SlimJ4 left a comment

  • Empty bullet point in Case Study 2
  • Potential consistency issue in "Rebuilding the RAID in normal mode"
**Case study 2** - There have been major system updates (e.g. GRUB) to the OS and the ESPs have been synchronised:

- The server is able to boot in normal mode because all the ESPs contain up-to-date information and the RAID rebuild can be carried out in normal mode.
-
Contributor

@SlimJ4 SlimJ4 Oct 30, 2025

Either remove the empty bullet point or add a "The server is unable to boot in normal mode..." line in Case Study 2
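
A simple way to check whether the ESPs are actually synchronised before picking a case study is a recursive diff. Illustrative only; device names and mount points are assumptions:

```sh
# Mount both EFI System Partitions on temporary mount points
mkdir -p /tmp/esp0 /tmp/esp1
mount /dev/nvme0n1p1 /tmp/esp0
mount /dev/nvme1n1p1 /tmp/esp1

# Compare their contents; no differences means the ESPs are in sync
diff -r /tmp/esp0 /tmp/esp1 && echo "ESPs are in sync"

umount /tmp/esp0 /tmp/esp1
```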


If your server is able to boot in normal mode after a disk replacement, you can proceed with the following steps to rebuild the RAID:

In our example, we replaced the disk **nvme1n1**.
Contributor

@SlimJ4 SlimJ4 Oct 30, 2025

In the previous examples (Disk fail simulation and rescue mode RAID rebuild), we failed the disk nvme0n1, then replaced it and synced it with nvme1n1. They seem to be reversed in the normal mode RAID rebuild example, as this line states "we replaced the disk nvme1n1". This inversion is consistent across the rest of the section.
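
Whichever disk naming is settled on, the normal-mode rebuild boils down to copying the partition table from the healthy disk to the new one and re-adding the partitions to the degraded arrays. A rough sketch, with device names, partition and array numbers as assumptions rather than the guide's exact commands:

```sh
# Copy the GPT partition table from the healthy disk (nvme0n1) to the replacement disk (nvme1n1)
sgdisk --replicate=/dev/nvme1n1 /dev/nvme0n1

# Give the replacement disk its own unique partition GUIDs
sgdisk -G /dev/nvme1n1

# Re-add the matching partition to the degraded array and watch the resync
mdadm --add /dev/md2 /dev/nvme1n1p2
cat /proc/mdstat
```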


Labels

  • Do not merge yet: This Pull Request is awaiting a GO from product teams or has to be merged on a defined date.
  • FIX: The Pull Request contains fixes of code or content (typos).
  • Guide creation: The Pull Request contains at least 1 new guide (meta.yaml and index edition needed).

4 participants