Configuring RAID 10 on a So you Start Dedicated Server for Proxmox VE

I bought a dedicated server from So you Start (an OVH brand) that ships with four 2T drives, but the default RAID level in the So you Start panel is RAID1, which leaves only 2T usable out of 8T, a 25% utilization rate. The redundancy is excellent (data survives even with multiple failed drives), but the usable capacity is too low, so RAID10 is a good compromise: it tolerates a failed drive without losing data while doubling the usable space. This post records how to convert a So you Start dedicated server from RAID1 to RAID10 online. The procedure applies to all of OVH's hosting brands, including OVH, So you Start, and Kimsufi, and the steps are similar for dedicated servers from other providers.

Prerequisites

  • Basic usage of mdadm, fdisk, and related commands
  • A basic understanding of RAID levels
  • Familiarity with partitioning on Linux

Main steps

  • Install Debian Buster with the default RAID1 layout using the system-reinstall tool in the So you Start panel
  • Convert the RAID1 array to RAID10 online
  • Install [[Proxmox VE]] on top of Debian

    mdadm /dev/md1 --fail /dev/sdc1
    mdadm /dev/md1 --remove /dev/sdc1
    wipefs -a /dev/sdc1
    mdadm --grow /dev/md1 --raid-devices=2

First, think about a partitioning scheme. There is usually no need to put absolutely everything on one single large partition. Proxmox, for instance, keeps disk images and the like under /var/lib/vz, which makes that an ideal mount point for a separate partition.
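
As a concrete illustration of that advice, a spare partition or md array can be formatted and mounted at /var/lib/vz later on. A minimal sketch, assuming a hypothetical /dev/md3 built from leftover space:

    # format a spare array for Proxmox disk images (/dev/md3 is hypothetical)
    mkfs.ext4 /dev/md3
    mkdir -p /var/lib/vz
    echo '/dev/md3 /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
    mount /var/lib/vz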

Install Debian

First, reinstall the system from the So you Start management panel with Reinstall.

  • Choose the Custom installation
  • In the partitioning step, install the system onto RAID1. Adjust the partition sizes to your needs; if you want to keep things simple, give all the space to / and leave some for swap. My machine has 32G of RAM, for example, so 16G can go to swap and the rest to /. If you are familiar with Linux partitioning and want to reserve the remaining space for RAID-x, ZFS, or LVM, you can instead create, say, a 2G /boot partition, 240G for /, and 16G of swap; the / array can then be converted from RAID1 to RAID10 afterwards.

After the installation completes, log into the system:

[email protected]:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster

Reshape RAID

Now reshape the RAID level. Special thanks to Falzo on LET: it was his detailed write-up that let me complete the online conversion from RAID1 to RAID10.

Roughly speaking, the array first has to be converted from RAID1 to RAID0, and then from RAID0 to RAID10.
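
Condensed from the full transcript below, the whole conversion on my machine (array md2) boils down to the following sequence; let each step settle before running the next, and watch /proc/mdstat for progress:

    # shrink the four-way RAID1 mirror to two members, freeing sdc2 and sdd2
    mdadm /dev/md2 --fail /dev/sdc2
    mdadm /dev/md2 --remove /dev/sdc2
    wipefs -a /dev/sdc2
    mdadm /dev/md2 --fail /dev/sdd2
    mdadm /dev/md2 --remove /dev/sdd2
    wipefs -a /dev/sdd2
    mdadm --grow /dev/md2 --raid-devices=2

    # convert the remaining two-member RAID1 to a single-member RAID0
    mdadm --grow /dev/md2 --level=0 --backup-file=/home/backup-md2

    # convert RAID0 to RAID10, handing back the other three partitions
    mdadm --grow /dev/md2 --level=10 --backup-file=/home/backup-md2 --raid-devices=4 --add /dev/sda2 /dev/sdc2 /dev/sdd2

    # the added partitions come in as spares; grow the member count to activate them
    mdadm --grow /dev/md2 --raid-devices=4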

First, check the existing RAID layout:

[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda2[0] sdc2[1] sdd2[3] sdb2[2]
      511868928 blocks super 1.2 [4/4] [UUUU]
      bitmap: 2/4 pages [8KB], 65536KB chunk

unused devices: <none>

There is a single array, md2, a RAID1 built from the four partitions sda2, sdc2, sdd2, and sdb2; [4/4] [UUUU] means all four members are active and in sync.

Next, the disk layout (with some sensitive identifiers blurred out):

[email protected]:~# fdisk -l
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: HGST HUS7-----AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B411C4C1-EA13-42F1-86D8-DC-------115

Device          Start        End    Sectors   Size Type
/dev/sdb1        2048    1048575    1046528   511M EFI System
/dev/sdb2     1048576 1025048575 1024000000 488.3G Linux RAID
/dev/sdb3  1025048576 1058603007   33554432    16G Linux filesystem


Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: HGST HUS7-----AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DA108B72-B409-4F9E-8FF1-0D---------8

Device          Start        End    Sectors   Size Type
/dev/sdc1        2048    1048575    1046528   511M EFI System
/dev/sdc2     1048576 1025048575 1024000000 488.3G Linux RAID
/dev/sdc3  1025048576 1058603007   33554432    16G Linux filesystem


Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: HGST HUS-----0AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DC27A340-79CB-437E-952F-97A-------A8

Device          Start        End    Sectors   Size Type
/dev/sdd1        2048    1048575    1046528   511M EFI System
/dev/sdd2     1048576 1025048575 1024000000 488.3G Linux RAID
/dev/sdd3  1025048576 1058603007   33554432    16G Linux filesystem


Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: HGST HU------0AL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 76C633FE-ACC3-40FA-A111-2C--------C8

Device          Start        End    Sectors   Size Type
/dev/sda1        2048    1048575    1046528   511M EFI System
/dev/sda2     1048576 1025048575 1024000000 488.3G Linux RAID
/dev/sda3  1025048576 1058603007   33554432    16G Linux filesystem
/dev/sda4  3907025072 3907029134       4063     2M Linux filesystem


Disk /dev/md2: 488.2 GiB, 524153782272 bytes, 1023737856 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The array can then be reshaped with mdadm. This step runs entirely online; no [[IPMI]] or other out-of-band tools are needed.

The online RAID1-to-RAID10 conversion is described very clearly in this article: [[Converting RAID1 to RAID10 online]].

The full session looks like this:

[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda2[0] sdc2[1] sdd2[3] sdb2[2]
      511868928 blocks super 1.2 [4/4] [UUUU]
      bitmap: 2/4 pages [8KB], 65536KB chunk

unused devices: <none>
[email protected]:~# mdadm /dev/md2 --fail /dev/sdc2
mdadm: set /dev/sdc2 faulty in /dev/md2
[email protected]:~# mdadm /dev/md2 --remove /dev/sdc2
mdadm: hot removed /dev/sdc2 from /dev/md2
[email protected]:~# wipefs -a /dev/sdc2
/dev/sdc2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
[email protected]:~# mdadm /dev/md2 --fail /dev/sdd2
mdadm: set /dev/sdd2 faulty in /dev/md2
[email protected]:~# mdadm /dev/md2 --remove /dev/sdd2
mdadm: hot removed /dev/sdd2 from /dev/md2
[email protected]:~# wipefs -a /dev/sdd2
/dev/sdd2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
[email protected]:~# mdadm --grow /dev/md2 --raid-devices=2
raid_disks for /dev/md2 set to 2
[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda2[0] sdb2[2]
      511868928 blocks super 1.2 [2/2] [UU]
      bitmap: 3/4 pages [12KB], 65536KB chunk

unused devices: <none>
[email protected]:~# mdadm --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid1
        Array Size : 511868928 (488.16 GiB 524.15 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Oct 21 13:33:45 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 158

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       2       8       18        1      active sync   /dev/sdb2
[email protected]:~# sudo mdadm --grow /dev/md2 --level=0 --backup-file=/home/backup-md2
mdadm: level of /dev/md2 changed to raid0
[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid0 sdb2[2]
      511868928 blocks super 1.2 64k chunks

unused devices: <none>
[email protected]:~# mdadm --misc --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid0
        Array Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 1
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Thu Oct 21 13:40:10 2021
             State : clean
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 64K

Consistency Policy : none

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 163

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync   /dev/sdb2
[email protected]:~# mdadm --grow /dev/md2 --level=10 --backup-file=/home/backup-md2 --raid-devices=4 --add /dev/sda2 /dev/sdc2 /dev/sdd2
mdadm: level of /dev/md2 changed to raid10
mdadm: added /dev/sda2
mdadm: added /dev/sdc2
mdadm: added /dev/sdd2
raid_disks for /dev/md2 set to 5
[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid10 sdd2[5] sdc2[4](S) sda2[3](S) sdb2[2]
      511868928 blocks super 1.2 2 near-copies [2/1] [U_]
      [>....................]  recovery =  0.5% (2835392/511868928) finish=50.8min speed=166787K/sec

unused devices: <none>
[email protected]:~# mdadm --misc --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid10
        Array Size : 511868928 (488.16 GiB 524.15 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 2
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Oct 21 13:42:49 2021
             State : active, degraded, recovering
    Active Devices : 1
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 3

            Layout : near=2
        Chunk Size : 64K

Consistency Policy : resync

    Rebuild Status : 1% complete

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 221

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync set-A   /dev/sdb2
       5       8       50        1      spare rebuilding   /dev/sdd2

       3       8        2        -      spare   /dev/sda2
       4       8       34        -      spare   /dev/sdc2
[email protected]:~# mdadm --misc --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid10
        Array Size : 511868928 (488.16 GiB 524.15 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 2
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Oct 21 13:47:58 2021
             State : active, degraded, recovering
    Active Devices : 1
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 3

            Layout : near=2
        Chunk Size : 64K

Consistency Policy : resync

    Rebuild Status : 11% complete

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 554

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync set-A   /dev/sdb2
       5       8       50        1      spare rebuilding   /dev/sdd2

       3       8        2        -      spare   /dev/sda2
       4       8       34        -      spare   /dev/sdc2
[email protected]:~# mdadm --misc --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid10
        Array Size : 511868928 (488.16 GiB 524.15 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 2
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Oct 21 13:48:29 2021
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 3

            Layout : near=2
        Chunk Size : 64K

Consistency Policy : resync

    Rebuild Status : 12% complete

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 588

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync set-A   /dev/sdb2
       5       8       50        1      spare rebuilding   /dev/sdd2

       3       8        2        -      spare   /dev/sda2
       4       8       34        -      spare   /dev/sdc2
[email protected]:~# mdadm --grow /dev/md2 --raid-devices=4
[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid10 sdd2[5] sdc2[4] sda2[3] sdb2[2]
      511868928 blocks super 1.2 64K chunks 2 near-copies [4/3] [U_UU]
      [>....................]  reshape =  0.2% (1387520/511868928) finish=67.4min speed=126138K/sec

unused devices: <none>
[email protected]:~# mdadm --misc --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid10
        Array Size : 511868928 (488.16 GiB 524.15 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Oct 21 13:50:47 2021
             State : clean, degraded, reshaping
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 64K

Consistency Policy : resync

    Reshape Status : 1% complete
     Delta Devices : 2, (2->4)

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 725

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync set-A   /dev/sdb2
       5       8       50        1      spare rebuilding   /dev/sdd2
       4       8       34        2      active sync set-A   /dev/sdc2
       3       8        2        3      active sync set-B   /dev/sda2
[email protected]:~# mdadm --misc --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid10
        Array Size : 511868928 (488.16 GiB 524.15 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Oct 21 13:51:59 2021
             State : active, degraded, reshaping
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 64K

Consistency Policy : resync

    Reshape Status : 3% complete
     Delta Devices : 2, (2->4)

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 769

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync set-A   /dev/sdb2
       5       8       50        1      spare rebuilding   /dev/sdd2
       4       8       34        2      active sync set-A   /dev/sdc2
       3       8        2        3      active sync set-B   /dev/sda2
[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid10 sdd2[5] sdc2[4] sda2[3] sdb2[2]
      511868928 blocks super 1.2 64K chunks 2 near-copies [4/3] [U_UU]
      [====>................]  reshape = 21.8% (111798784/511868928) finish=59.6min speed=111736K/sec

unused devices: <none>
[email protected]:~# mdadm --misc --detail /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid10
        Array Size : 511868928 (488.16 GiB 524.15 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Oct 21 14:05:44 2021
             State : active, degraded, reshaping
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 64K

Consistency Policy : resync

    Reshape Status : 22% complete
     Delta Devices : 2, (2->4)

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 1345

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync set-A   /dev/sdb2
       5       8       50        1      spare rebuilding   /dev/sdd2
       4       8       34        2      active sync set-A   /dev/sdc2
       3       8        2        3      active sync set-B   /dev/sda2
[email protected]:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  8.9M  3.2G   1% /run
/dev/md2        481G  1.5G  455G   1% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sdd1       511M  3.3M  508M   1% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1000
[email protected]:~# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
sda       8:0    0   1.8T  0 disk
├─sda1    8:1    0   511M  0 part
├─sda2    8:2    0 488.3G  0 part
│ └─md2   9:2    0 488.2G  0 raid10 /
├─sda3    8:3    0    16G  0 part   [SWAP]
└─sda4    8:4    0     2M  0 part
sdb       8:16   0   1.8T  0 disk
├─sdb1    8:17   0   511M  0 part
├─sdb2    8:18   0 488.3G  0 part
│ └─md2   9:2    0 488.2G  0 raid10 /
└─sdb3    8:19   0    16G  0 part   [SWAP]
sdc       8:32   0   1.8T  0 disk
├─sdc1    8:33   0   511M  0 part
├─sdc2    8:34   0 488.3G  0 part
│ └─md2   9:2    0 488.2G  0 raid10 /
└─sdc3    8:35   0    16G  0 part   [SWAP]
sdd       8:48   0   1.8T  0 disk
├─sdd1    8:49   0   511M  0 part   /boot/efi
├─sdd2    8:50   0 488.3G  0 part
│ └─md2   9:2    0 488.2G  0 raid10 /
└─sdd3    8:51   0    16G  0 part   [SWAP]
[email protected]:~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid10 sdd2[5] sdc2[4] sda2[3] sdb2[2]
      511868928 blocks super 1.2 64K chunks 2 near-copies [4/3] [U_UU]
      [======>..............]  reshape = 32.9% (168472448/511868928) finish=49.0min speed=116718K/sec

unused devices: <none>
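
The reshape keeps running in the background, so the wait can be spent elsewhere; progress is easy to keep an eye on with, for example:

    watch -n 30 cat /proc/mdstat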

After a fairly long wait, the RAID10 reshape completes:

[email protected]:~# mdadm --misc --detail /dev/md2 
/dev/md2:
           Version : 1.2
     Creation Time : Thu Oct 21 12:58:06 2021
        Raid Level : raid10
        Array Size : 1023737856 (976.31 GiB 1048.31 GB)
     Used Dev Size : 511868928 (488.16 GiB 524.15 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Oct 22 01:39:27 2021
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 64K

Consistency Policy : resync

              Name : md2
              UUID : 0686b64f:07957a70:4e937aa2:23716f6e
            Events : 6536

    Number   Major   Minor   RaidDevice State
       2       8       18        0      active sync set-A   /dev/sdb2
       5       8       50        1      active sync set-B   /dev/sdd2
       4       8       34        2      active sync set-A   /dev/sdc2
       3       8        2        3      active sync set-B   /dev/sda2
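
One follow-up worth doing at this point, which the transcript above does not cover: make sure /etc/mdadm/mdadm.conf and the initramfs reflect the new geometry, so the array assembles cleanly at boot. On Debian that is roughly:

    # print the current ARRAY line; compare it with /etc/mdadm/mdadm.conf and replace any stale entry
    mdadm --detail --scan
    # rebuild the initramfs so the boot environment knows the new layout
    update-initramfs -u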

Install Proxmox VE on Debian

Once the RAID10 conversion is done, any space still left on the disks can be partitioned separately and used however you like, for example for ZFS with raidz.
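
For instance, with one extra partition carved out of each disk and the zfsutils-linux package installed, a raidz pool could be built across them. A minimal sketch; the sda5 through sdd5 partition names are hypothetical:

    # create a raidz1 pool from one spare partition per disk (partition names are hypothetical)
    zpool create -o ashift=12 tank raidz /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5
    zfs set compression=lz4 tank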

Then follow the official guide to install Proxmox VE directly on top of Debian. Afterwards, remove cloud-init, otherwise the network configuration will run into problems.
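
For reference, a condensed sketch of the official "Install Proxmox VE on Debian Buster" procedure; check the guide itself for the authoritative repository details and the /etc/hosts requirements:

    # add the no-subscription repository and its signing key
    echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
    wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

    # upgrade and pull in Proxmox VE
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi

    # drop cloud-init so it cannot rewrite the network configuration
    apt purge cloud-init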

Reference

  • [[mdadm-command]]