BTRFS RAID5 on Ubuntu Server NAS drives - 3x 2TB WD RED HDDs

This topic is closed.


    I've been reading a lot about btrfs. The problem is it's difficult to find current information. A patch to fix the raid5 "hole" was implemented in Linux 4.1 and since I'm using 4.4, I assume I should be able to use btrfs raid5.

    One issue I'm having is understanding the terms: metadata (mkfs.btrfs -m) vs. data (mkfs.btrfs -d).

    I know I'm using the correct disk devices:

    Code:
    mark@Ubuntu-Server:~$ sudo btrfs filesystem show
    Label: none  uuid: 3e3ba67b-65cc-4956-bded-500c89323570
         Total devices 3 FS bytes used 112.00KiB
         devid    1 size 1.82TiB used 2.00GiB path /dev/sda
         devid    2 size 1.82TiB used 1.01GiB path /dev/sdb
         devid    3 size 1.82TiB used 2.01GiB path /dev/sdc
    Here's the command I really thought would work:

    Code:
    sudo mkfs.btrfs -f raid5 /dev/sda /dev/sdb /dev/sdc
    Results:
    Code:
    btrfs-progs v4.4
    
    Failed to check size for 'raid5': No such file or directory
    The last command I issued apparently separated metadata from data across the pool in some way:

    Code:
    mark@Ubuntu-Server:~$ sudo mkfs.btrfs -f -m raid1 -d raid5 /dev/sda /dev/sdb /dev/sdc
    btrfs-progs v4.4
    See http://btrfs.wiki.kernel.org for more information.
    
    Label:              (null)
    UUID:               3e3ba67b-65cc-4956-bded-500c89323570
    Node size:          16384
    Sector size:        4096
    Filesystem size:    5.46TiB
    Block group profiles:
    Data:             RAID5             2.01GiB
    Metadata:         RAID1             1.01GiB
    System:           RAID1            12.00MiB
    SSD detected:       no
    Incompat features:  extref, raid56, skinny-metadata
    Number of devices:  3
    Devices:
    ID        SIZE  PATH
     1     1.82TiB  /dev/sda
     2     1.82TiB  /dev/sdb
     3     1.82TiB  /dev/sdc
    So it looks like I set up the metadata as raid1 and the data as raid5? I'm not happy about the "Incompat features: raid56" flag. I'm also not sure how my 6TB is configured; in fact, I can't see anywhere close to 6TB of NAS storage. I see about 3.12GiB, not 1.82TiB x 3 = 5.46TiB.
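    As a sanity check on those numbers: the small "used" figures in `btrfs filesystem show` are chunks allocated so far, not capacity, and RAID5 reserves one device's worth of space for parity, so three equal drives give roughly two-thirds of the raw total as usable data space. A rough back-of-envelope sketch (GiB values are approximations, not exact device sizes):

    ```shell
    # Back-of-envelope check of the pool sizes seen above.
    # 1.82 TiB per drive is roughly 1863 GiB (approximate).
    n=3                              # number of devices
    dev_gib=1863                     # per-device size in GiB
    raw=$(( n * dev_gib ))           # raw pool size, ~5.46 TiB
    usable=$(( (n - 1) * dev_gib ))  # RAID5 keeps one device's worth of parity
    echo "raw: ${raw} GiB"
    echo "usable with RAID5 data: ${usable} GiB"
    ```

    So even once data is written, something closer to 3.6TiB of usable space is the most one should expect from this layout, not 5.46TiB.
    
    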

    Please advise.
    "If you're in a room with another person who sees the world exactly as you do, one of you is redundant." Dr. Stephen Covey, The 7 Habits of Highly Effective People

    #2
    Answering part of my own post: btrfs does not show the full pool storage because, as I understand it, it allocates space on demand. If it allocated all the space up front, the user couldn't resize the filesystem with an 'increase' or 'decrease' command. Since I want to use the entire pool for data (including the metadata, I assume), I could increase the size to the maximum possible.

    In btrfs speak, metadata is the information used to rebuild lost or missing data in the pool, as would happen if a drive fails. What I don't understand is this: if all the metadata sits on a single physical drive and that drive failed, it seems that would be worse than a setup with the metadata and data combined across drives.

    I want to create an entire pool as raid5 with the metadata spread across the pool like the data will be. I tried the command:

    Code:
    sudo mkfs.btrfs -M raid5 /dev/sda /dev/sdb /dev/sdc
    The -M switch is for 'mixed.' According to the man page, -M mixes the data and metadata chunks together for more efficient space utilization, but it incurs a performance penalty on larger filesystems and is recommended only for filesystems of 1GB and smaller. Okay, so I won't use the -M switch.

    Comment


      #3
      The RAID5 configuration for Btrfs is not recommended at this time. Just TWO DAYS ago a patch was submitted to the kernel mailing list to fix the RAID5 "write hole" that crashes a Btrfs RAID5 system. The patch is for kernel 4.12. It also adds fixes to the utility btrfs-progs:
      Two btrfs-progs patches are required to play with this patch set: one is to enhance 'btrfs device add' to add a disk as raid5/6 log with the option '-L', the other is to teach 'btrfs-show-super' to show %journal_tail.

      This is currently based on 4.12-rc3.

      You haven't said what kernel you are running but I doubt it is 4.12-rc3 (release candidate 3).
      Running RAID1 across sda and sdb with sdc used as a snapshot backup may be a better configuration at this time.
      Code:
      $ sudo btrfs filesystem usage /
      Overall:
          Device size:                   1.36TiB
          Device allocated:            216.06GiB
          Device unallocated:            1.15TiB
          Device missing:                  0.00B
          Used:                        205.56GiB
          Free (estimated):            591.08GiB      (min: 591.08GiB)
          Data ratio:                       2.00
          Metadata ratio:                   2.00
          Global reserve:              186.05MiB      (used: 0.00B)
      
      Data,RAID1: Size:106.00GiB, Used:101.80GiB
         /dev/sda1     106.00GiB
         /dev/sdc      106.00GiB
      
      Metadata,RAID1: Size:2.00GiB, Used:1006.31MiB
         /dev/sda1       2.00GiB
         /dev/sdc        2.00GiB
      
      System,RAID1: Size:32.00MiB, Used:16.00KiB
         /dev/sda1      32.00MiB
         /dev/sdc       32.00MiB
      
      Unallocated:
         /dev/sda1     583.15GiB
         /dev/sdc      590.61GiB
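      A quick way to read that output: "Data ratio: 2.00" means every byte is stored twice (RAID1 keeps two copies), so the 205.56GiB "Used" figure corresponds to roughly half that in unique file data. As a sketch:

      ```shell
      # "Used" counts both RAID1 copies; dividing by the data ratio
      # estimates the unique data actually stored.
      used_gib=205.56
      ratio=2.00
      awk -v u="$used_gib" -v r="$ratio" \
          'BEGIN { printf "approx unique data: %.2f GiB\n", u / r }'
      ```
      
      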
      Last edited by GreyGeek; Sep 22, 2017, 11:52 AM.
      "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people."
      – John F. Kennedy, February 26, 1962.

      Comment


        #4
        Well I totally screwed that up. I thought the numbers referred to the BTRFS version number, and since I see:

        Code:
        btrfs-progs v4.4
        See http://btrfs.wiki.kernel.org for more information.
        I didn't think the patch applied to me, as it was three versions old (4.4 vs. 4.1).

        Okay then, am I correct that if I start using the array as raid1, then when the fix for raid5 arrives I will be able to dynamically change the array, with a btrfs command, to raid5 without losing the data on the array?

        Comment


          #5
          AFTER the patch trickles down to Kubuntu you can convert between RAID configurations using the balance command. Something like this:

          Code:
          mount /dev/sda /mnt
          (Use sda1 if you created a partition.) That assumes you used sda and sdb as your two RAID1 HDs. Mounting any drive in a btrfs pool mounts the whole pool, so both sda and sdb will be available.

          Code:
          btrfs device add /dev/sdc /mnt
          This will add the third drive to the pool. You may want to back up anything you have on it to your RAID1 setup first.

          Code:
          btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt
          This will convert both data and metadata to RAID5.

          Source: https://btrfs.wiki.kernel.org/index....ltiple_Devices
          Last edited by GreyGeek; Aug 06, 2017, 02:06 PM.

          Comment


            #6
            Outstanding! Good answer. BTW, I figured out how to mount my btrfs array using its UUID in /etc/fstab. It works well. I mounted it to a directory I created /media/nas.

            I'm about 5 minutes behind your answers. I'm jumping back to my thread to get the server online. I hope you answered that one <grin>.
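            For anyone following along, an /etc/fstab entry of the kind described might look like this (mount point and options are illustrative; the UUID is the one shown by mkfs.btrfs earlier in the thread, and `sudo blkid` will print the UUIDs for your own devices):

            ```
            UUID=3e3ba67b-65cc-4956-bded-500c89323570  /media/nas  btrfs  defaults  0  0
            ```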

            Comment


              #7
              Originally posted by mhumm2 View Post
              Outstanding! Good answer. BTW, I figured out how to mount my btrfs array using its UUID in /etc/fstab. It works well. I mounted it to a directory I created /media/nas.

              I'm about 5 minutes behind your answers. I'm jumping back to my thread to get the server online. I hope you answered that one <grin>.
              UUIDs are THE preferred method for accessing storage devices, which is why fstab uses them. You'll notice that my RAID1 system uses sda1 and sdc. Initially they were sda and sdb. When I added a 3rd HD by replacing my CDROM with an HD caddy, that HD became sdb and what was sdb became sdc. The UUIDs, however, never changed and remained correct. Always use UUIDs.

              Comment


                #8
                I'd recommend doing some reading before diving into btrfs at the server level and loading up all your data. I find Arch maintains a very well done and up-to-date btrfs wiki.

                Metadata is the info the filesystem uses to find the files it has stored, and data is the actual file content. You are correct in assuming that having metadata on a single drive is a bad idea - if you lose that drive, you lose everything. However, that is not the default setup when the btrfs filesystem is created. The default on multiple devices (regardless of RAID level or non-RAID) is for metadata to be mirrored (raid1). From the wiki:
                The RAID levels can be configured separately for data and metadata using the -d and -m options respectively. By default the data is striped (raid0) and the metadata is mirrored (raid1). See Using Btrfs with Multiple Devices for more information about how to create a Btrfs RAID volume as well as the manpage for mkfs.btrfs.
                Here's my desktop RAID0 primary filesystem:
                Code:
                stuart@office:/subvol$ bt fi df /
                Data, RAID0: total=308.00GiB, used=286.62GiB
                System, RAID1: total=32.00MiB, used=36.00KiB
                Metadata, RAID1: total=4.00GiB, used=2.77GiB
                GlobalReserve, single: total=512.00MiB, used=0.00B
                Notice RAID0 for data and RAID1 for metadata.

                As GG pointed out - one huge plus is you can reconfigure RAID levels on the fly, but I'd stay away from RAID5/6 for daily use at this point.

                A couple of points:
                If you use the whole drive rather than a partition (i.e. /dev/sda instead of /dev/sda1), you can't install GRUB or boot to it. This is not a problem as long as you have a boot device available.
                If you want to use GRUB and also use GPT partitioning instead of MBR, you need to make a small partition for GRUB to use.
                If you are using RAID1 (mirroring), you will want your devices or partitions to be similar sizes, or you will not be able to access all the space on the larger device.
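                That last point can be illustrated with a quick calculation: btrfs RAID1 stores two copies of every chunk on two different devices, so with two devices of unequal size the usable space is limited by the smaller one (the sizes below are hypothetical, not from this thread):

                ```shell
                # Two-device RAID1: every chunk needs a partner copy on the
                # other device, so usable space is the smaller device's size.
                a_gib=2048   # hypothetical 2 TiB drive
                b_gib=1024   # hypothetical 1 TiB drive
                if [ "$a_gib" -lt "$b_gib" ]; then
                    usable=$a_gib
                else
                    usable=$b_gib
                fi
                stranded=$(( a_gib - b_gib ))
                echo "RAID1 usable: ${usable} GiB (${stranded} GiB on the larger drive sits unused)"
                ```
                
                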

                Pending the ability to use RAID5/6, how you should configure your drives now would depend on the size and desired usage. For example, are backups kept off-line (off the NAS) or on?

                Please Read Me

                Comment
