New to Linux - RAID installation

This topic is closed.

    #16
    Re: New to Linux - RAID installation

    Originally posted by oshunluvr
    Originally posted by ViperLancer
    Oh yes, another question that I'm sure is explained somewhere but I haven't yet found it...

    For the Linux partitions, you need /boot, /root, /swap, /home, any others? What sizes should /boot and /root be? /swap should equal how much RAM you have, and /home should be how much space you want to give to the user directories right? Does the /home store all user data, so like 95% of your HDD should go to this?

    Any links to or pointers of what I should search for to find a good explanation of this?
    Thanks.
    Actually those directories you refer to need not be actual partitions. They MAY be if you wish - linux doesn't care. Swap space is normally a separate partition by default, but can also be a file (like windows) if you wish or you may not have one at all - linux doesn't care.

    The recommended basic and usually default setup is:

    one partition for swap, equal to your RAM size - roughly the same as the windows paging file; having a dedicated partition for it provides better performance.
    one partition for / - this holds your OS and all software
    one partition for /home - this holds your user(s) settings and data, roughly equivalent to the windows "Documents and Settings" directory.
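
    For a rough illustration, here's what that basic three-partition layout might look like in /etc/fstab once installed (the device names and filesystem are just example assumptions - yours will differ):

    # /etc/fstab - sketch only
    /dev/sda1   none    swap    sw          0   0    # swap, roughly RAM-sized
    /dev/sda2   /       ext4    defaults    0   1    # OS and all software
    /dev/sda3   /home   ext4    defaults    0   2    # user settings and data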

    I can't stress enough how flexible everything is. In fact - with all you've got going on on your hard drives, and since you're kind of testing or just starting out with linux - you should probably keep everything in one partition for simplicity's sake for now.
    Wait, I think I partially remember (could be wrong) what my old Linux mentor used to tell me on partition sizes.

    /boot = 100MB
    / = 10GB
    /root = 1GB
    /swap = RAM size (at least 2GB)
    /home = remaining space

    Does that sound right?

    Originally posted by oshunluvr
    My setup?
    Four RAID0 OS devices (daily user, testing and backup), four more non-raid partitions for other OS's, four swap partitions (they function together like a RAID0), four /boot partitions for each RAID0 OS device (/boot can't be inside a RAID0), two dedicated partitions for GRUB-PC in RAID1 (primary and backup boot manager - this is not the normal setup), a single RAID5 device for storing my data files on, a RAID0 device for /tmp, a separate file server for movies, music, pics, whatever, and finally a btrfs device spread over four drives holding backups.

    My desktop partition/device sizes (4x400gb drives) are roughly 16gb for each OS both RAID and non-RAID, 200gb for my personal files, 32gb for tmp, 4x2gb for swap, backup space is 1.5tb, /boot and GRUB partitions are 400mb (so they hardly count). The server has 1.7tb for media and .3tb for backing up my personal files. The server OS takes about 1.5gb and serves two printers, a scanner, voip, ktorrent and a couple of other things along with the files.

    I have a media computer in the TV room, my power desktop unit, the server, and the family's and neighbors' laptops using various flavors of windows and linux, and they all use the printers and access the media files and can backup to the server.

    Not much, right?
    So I'm still confused on something here. You created a partition for this GRUB, and it can be on a RAID device (RAID1 makes sense, good idea!), but /boot can't be on a RAID partition at all. So you have 4 OSes on your PC, do you need a /boot partition for each? Or just one to store the records for each OS installed, but if so why do you have 4?
    So one thing about me is I like symmetry, so I like your setup. If I CAN install Linux in some of my unpartitioned space on my System drive, should I create a /boot for each Linux OS I want to install? I'd REALLY like to keep my partitions symmetrical...
    "Were you killed?" "Sadly, yes... But I LIVED!"
    Antec Two Hundred case | Gigabyte GA-EP45-UD3P | Intel Core2 Duo E8400 | 4GB Kingston DDR2-1066 | 2x320GB WD Caviar (RAID0, OSes) | 2x1TB Hitachi Deskstar (RAID0, Programs/Data) | 1x2TB Hitachi Deskstar (Backup) | Gigabyte ATI Radeon HD5770 | 5x120mm fans



      #17
      Re: New to Linux - RAID installation

      re MBR: Yes exactly, if you install GRUB to any drive other than your windows boot drive, your windows MBR will be intact. You don't have to do it that way - but I think it's like an added backup boot partition. I usually install GRUB to at least two drives just in case. Your MBR is only 512b and that includes your partition table.

      re LVM: Example: I have a server that 20 clients use. One day, the partition they all share files on (an LVM partition) has grown full and my clients need more space. Do I take the server down, backup their data, format a new larger drive, restore the data? No, I plug in the new drive and with a couple of keystrokes add it to the LVM and voila - they have more space. Same server - drive #2 is starting to throw errors. Do I backup, replace, restore? No - I add a replacement drive to the LVM, have the LVM move the data off the failing drive to elsewhere in the LVM, and when it's empty remove the dying drive from the LVM and finally from the computer. All this occurs without taking the server off-line except to install or remove the physical devices.

      re partition sizes: Your old linux mentor is basically correct. Current belief is somewhat expanded I suppose because hard drive space is so cheap now. Having swap equal to your ram size is necessary for laptops (suspend to ram) but for a desktop your needs could be less, but as I said - when you have TB's, wasting a few MB's ain't gonna kill ya.

      Modern linux software has grown in size a bit. I think 10gb for all your OS and /tmp and a large number of programs might be a bit thin. It would be plenty if you're not installing a ton of graphical stuff. As I said my server was about 2-3gb's last time I checked and that's with KDE.

      I can't think of a reason why you would need /root on its own partition.

      /boot however is commonly separate and must be if you're using softRAID. /boot holds your kernel image and initial ram disk that allows booting. If you're using softRAID (other than 1 which I'll get to in a moment) you need /boot to be outside your softRAID. 100mb will work but /boot also holds GRUB in /boot/grub unless you take the extra effort to create a separate GRUB partition, so you'll have to keep your kernel list down to 2 or 3 (which isn't hard) or make /boot at least 200mb. I wouldn't recommend a separate grub partition for a newbie.

      Re /boot and GRUB-PC being RAID1: As you know RAID1 (mirroring) basically is keeping multiple copies of the same data. Using linux software RAID1 on two partitions leaves you with two separate copies of the same data and they are both accessible individually if you don't load the RAID device. Confused?

      Drives, partitions, and created softRAID arrays are addressed as "devices" in linux. So let's assume two drives 120gb each for this example. The primary, extended, logical definitions are the same for windows and linux with a few small differences:
      1. Linux can boot from primary or logical anywhere on the disk.
      2. Linux can access all primary partitions on a disk all the time, windows only one.

      Linux hard disk device addressing uses a combination of interface, type of device, lettering for the drive, and numbering for the partition.

      In our two drive example, our drives are /dev/sda and /dev/sdb s=scsi or sata d=drive a=first drive b=second drive and so on. The older IDE interface was "h" (I don't know why, hard disk?) but most newer distros have set all hard drives and cdrom's as "s" interface to keep things simpler.

      Partitions are numbered. Numbers 1-4 are reserved for primary/extended partitions. Logical partitions begin at 5. I believe there may be an upper limit (64?) but I've never gone above 14 so I'm not sure.

      So back to our example, the first primary partition is 1 (duh) and we have swap there, then we have 100mb for /boot, the rest goes into extended, then we create three logical partitions. So our device list looks like:

      /dev/sda1 = 2gb swap
      /dev/sda2 = 200mb /boot
      /dev/sda5 = 6gb
      /dev/sda6 = 100gb
      /dev/sda7 = 12gb

      Drive "b" is the same. Now we want to install our system to RAID0 using linux software RAID. We create our first RAID device by combining /dev/sda5 and /dev/sdb5, giving us a nice 12gb RAID0 device for our install. What is the device name for the RAID device? Whatever we want! Convention is to use "md" because the program that creates the RAID is named "mdadm". I like to keep things logical and we used /dev/sda5 and /dev/sdb5 so I would name it "md5" so then the device name becomes /dev/md5.

      So now I install my linux OS - I use both swaps /dev/sda1 and sdb1 and assign them equal priority (this allows linux to treat them like a RAID device automatically), /dev/sda2 as /boot and /dev/md5 as /. So what do I do with the rest of my drive space? I want a second OS to play around with and a backup of my main OS so I install OS #2 to /dev/sda7.
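
      The equal-priority swap trick is just an fstab setting - something like this (the pri= value only has to match across the entries):

      # two swap partitions with equal priority - the kernel stripes across them
      /dev/sda1   none   swap   sw,pri=1   0   0
      /dev/sdb1   none   swap   sw,pri=1   0   0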

      Here's where RAID1 comes in: Since RAID0 is basically unrecoverable with a drive failure, I want to duplicate my /dev/md5 to /dev/sdb7. So I create a RAID1 on /dev/sda2 and /dev/sdb2 - this copies /boot so it's on both drives - and I create a RAID1 on /dev/md5 and /dev/sdb7 - this creates a copy of my OS. My new RAID1 devices are /dev/md2 for /boot and / is now /dev/md57 (5 and 7). A few edits to my grub entries and I'm booting up. I also create grub entries to boot my backup OS without RAID1 and to boot my other OS on /dev/sda7. All changes I make on the RAID1 devices are duplicated so I don't need to worry about backup maintenance.

      So far we now have:
      /dev/sda1 + sdb1 = 2x2gb swap
      /dev/md2 = 200mb /boot with member devices sda2 and sdb2
      /dev/md57 = 12gb with members md5 (whose members are sda5 and sdb5) and sdb7
      /dev/sda6 + sdb6 = 2x100gb still unused
      /dev/sda7 = 12gb - secondary OS for playing with

      What to do with the 2x100gb partitions? I need space for my /home. RAID5 needs at least 3 devices so I can't do that. The choices are RAID1 for total data security, RAID0 for no security but total access to all the space and a speed bump, LVM for total space use and maximum flexibility for expansion later on.

      I'll stop there - but you can see there are literally dozens of variations of RAID on top of RAID and/or LVM.

      RAID1 devices - if you don't load mdadm or stop the device - are accessible individually stand alone. So if one device fails, you can access /dev/sda2 rather than /dev/md2. If your head isn't already spinning try out the idea of 4 drives!

      Obviously this is complicated and really depends on how mission-critical your system is. Most users have no need for all this - but it's fun to play with!




        #18
        Re: New to Linux - RAID installation

        Huh, I never knew the MBR & partition table was only 512-bytes. So if I install GRUB to any OTHER drive I'm safe, what about if I install it to my Windows drive? If done correctly it won't be an issue, but I could also mess up my Windows? But if I install GRUB to a different HDD, it can be a backup boot partition? Like in the case something happens to my primary boot partition, the BIOS will keep searching HDD's and find the GRUB?

        Okay, that's kinda what I was thinking about LVM, but I think you cleared it up in my head and now I see your point. That could be very useful, though my thought is that in any environment, if you start getting errors on a drive, you're probably already screwed (losing some data) unless it's in a RAID1, right?
        For my old mentor's info, good to know I still mostly remember it, but I could be wrong about /root getting a partition - I may be confusing it, like you said earlier, with the root directory "/". I could easily be confusing what he told me. So where would /root typically be placed? I really need a good article on the hierarchy of Linux directories/partitions. lol

        I also found out that my motherboard/RAID utility allows me to create 2 RAIDs on each array (that appear to Windows as separate HDD's, not just partitions), so I can create one for my Windows installs that uses half the drive space (298/596GB), with 3 primary partitions (1 for each OS @ 40GB) and a shared extended for data/programs. So if I were to do that and had the second half of my 320GB drives unraided/unpartitioned, could I put the /boot there and softRAID1 it in Linux, or does it have to be at the beginning of the drive? Would GRUB be able to point the MBR to /boot halfway down the disk?

        So I can keep /grub in /boot, use 200MB, and that'll work for multiple OSes? How many OSes could be stored in /boot? If /grub can be done separately, why not create /grub as the primary partition to point to each of 3 Linux OSes in

        I just wiped my Windows install (using work laptop at the moment) due to issues, so I can play around with the partitioning.
        So here's my thought of a disk layout (assuming it'll work):

        2x298GB = 596GB Effective
        Intel RAID0 using 1st 298GB, 3x40GB primary partitions for Windows OSes, 1x178GB for shared data.

        Linux gets remaining 298GB, 3x primary partitions, 1 extended, multiple logical as follows.
        /dev/sda1 = 200MB /boot (primary)
        /dev/sda2 = 4GB /swap (primary)
        /dev/sda5 = 10gb / (extended, why not primary?)
        /dev/sda6 = 20gb /
        /dev/sda7 = Remaining space /home

        Duplicate to second HDD

        /dev/sdb1 = 200MB /boot (primary)
        /dev/sdb2 = 4GB /swap (primary)
        /dev/sdb5 = 10gb / (extended, why not primary?)
        /dev/sdb6 = 20gb /
        /dev/sdb7 = Remaining space /home

        Then comes the RAID setup:

        /dev/md1 = 200MB /boot (sda1/sdb1 mirrored)
        /dev/md5 = 20GB / (sda5/sdb5 striped)
        /dev/md6 = 20GB / (sda6/sdb6 mirrored)
        /dev/md7 = Remaining space /home (sda7/sdb7 striped)

        I may also add-
        /dev/md8 = 20GB / (md5/md6 mirrored)

        That's pending whether it'll work treating the RAID0 partition and the non-RAID space as separate HDD's; if that doesn't work, how do I partition it to allow both Windows and Linux (considering I'd need at least 3x installs of Windows)?

        Does that setup look alright? Yeah, it'll give me 3 copies of the OS, but I'll have my striped set for performance, plus a whole backup on each drive so if either drive fails I'm fine. I may also simply double sda5/sdb5 and not use a backup of the OS on this HDD set. My issue with doing the /dev/md8 is, what's the performance side-effect of using a mirror on a stripe? If /md5 can write at 2x speed, but /md6 can only write at 1x speed, how does that work?
        I'll likely just go without /md8 and do a complete backup of /md5 after I get it working to /md6, so any changes I do make WON'T propagate, since I may very well hose my own OS messing with it.

        If Linux does store its OS files and programs in "/", then 20GB may not be enough in the long run, will it?

        Can Linux/Unix share program installs among themselves or each other? If so, how would I set it up to contain each OS on its own, but share their programs? I wonder if GRUB can boot Unix...

        If /home doesn't have to be its own partition, I might make it a folder directly on my Data01 stripe. Or preferably, blend it with my \Users\ folder from my Windows side on Data01, so I can read my documents from either OS without having to duplicate them. Thoughts?
        So do I have to create /swap as primary partitions? If not, why do you? Also, why would you place /swap as a higher device (sda1) than /boot (sda2)?

        In the end, I would like to run AT LEAST 1 Linux & 1 Unix OS (aside from Windows). Not sure exactly how much they can share...

        Thank you so very much for taking the time to assist me with all this!


          #19
          Re: New to Linux - RAID installation

          At first glance your setup plan looks good. I think you'll find you won't need that much space for the install - but you're reserving a ton so there's plenty to go around. 20gb is plenty with 6gb to spare. Linux programs aren't as sloppy as windows are.

          Re linux and its directories: In the old days - it was common to use multiple drives and partitions to enhance performance. Now drives are huge and way faster so it's more common to leave everything on one partition. I used to spread the larger subdirectories across drive channels to maximize transfer speeds. Now I doubt you'd ever notice the difference - if there is any.

          My beliefs:
          If you're booting to RAID0 or 5 you need /boot separate, otherwise not.

          You should, when possible, spread swap across all available drives. Swap is used to replace RAM when you run out. Obviously even the very best swap setup is vastly slower than RAM, so the faster you can get it the better. The less likely you are to use swap (more RAM = less swap), the less this matters.

          /tmp is a location that gets frequent reads/writes, so it needs to be as high-performance as possible and so should always be on a separate RAID0. /tmp is not mission critical so if it dies, who cares? Additionally, if you are regularly booting various installs, /tmp space can be shared among them all without ill effects, thus saving overall space.

          extended vs. primary? I don't know of any reason to pick one over the other. I don't use windows so it totally doesn't matter to me. In theory, you could partition the whole thing extended/logical.

          Sharing subdirectories across different installs is a bad idea. /boot holds the kernel image so allowing one install to overwrite the kernel of another would kill one or both of them. I was referring to having enough space to hold various kernels from the same install.

          What you can share is /tmp. Some think you can share /home, some not. I say not - too many settings can get wacky. I solve this by leaving /home within each install, but keeping pure data - music, graphics, documents, etc. - in a separate partition that I do share. I call it /files and I link to it from each /home in each install. It's best to keep your data apart so you can wipe your install without worrying about your data. This method works better if you're sharing the data files between windows and linux too.
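
          A minimal sketch of that arrangement (the partition, mount point, and user name here are made-up examples):

          # mount the shared data partition at /files in every install (/etc/fstab line)
          /dev/md6   /files   ext4   defaults   0   2

          # then, in each install, link the usual data directories from your home into it
          # (move or remove the original empty directories first)
          ln -s /files/Music      /home/yourname/Music
          ln -s /files/Documents  /home/yourname/Documents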

          Basic directory structure:
          /boot : kernel images and boot files
          /root : root user home
          /etc : configuration files
          /bin and /sbin : executables
          /usr : installed software and programs
          /lib and /lib32 : libraries (like windows dll's)
          /dev : device addresses
          /proc : system operating data
          /mnt : directories used for mount locations for other drives/partitions
          /tmp : temporary files
          /home : user settings and files
          /sys : system files
          /var : logs and generated output files
          /opt : alternate install location for programs (rarely used anymore)

          There's a few others, but these are the main players. As a new user I would not recommend you spread out your install. You'll see no benefit and you'll waste space.

          The best partition to put first on the drive is swap for performance. I always use the first primary partition because I know partition1 is swap on all my drives and partitions. /boot, /, /home are the only other separate partitions you need at this point.



            #20
            Re: New to Linux - RAID installation

            Okay, so a few problems I've got right now...
            1st, I can't find this GRUB program on the Kubuntu CD when I run it Live, so how/where do I find/use it?
            2nd, from your description, if /boot holds the kernel to boot the OS and I install multiple OSes, do they each have to have their own /boot? Can they share the /grub if it is separate?
            3rd, you say /tmp can be shared, and you can link multiple OSes to your file directory but not to share anything else. What about /swap?

            Yeah, I think the space thing is simply because I know how to install Windows programs in different directories, but don't with Linux, so I want to leave plenty of room to install programs/games/etc. As I said, I've effectively got 596GB of HDD space on my primary drives to install 3-6 OSes, depending on how this plays out and if my motherboard does allow me to have my System RAID0 act as 2 HDD's to be partitioned separately.

            Agreed, programs written by Linux people aren't as sloppy as Windows. OpenOffice is one program that proved that to me. 1/3 the install size, and file formats are 1/7 the size

            Okay, so if I understand this correctly...

            /boot = Can be its own partition, not required unless installing to RAID other than 1.
            / = Root directory, stores most other directories (/etc, /bin, /usr, /lib, /dev, /proc, /mnt, /tmp, /sys, /var, /opt if used)
            /home = User directory for settings and files (does this store /root?)

            So a directory structure would/could look like this?

            /dev/sda1/boot
            /dev/sda2/swap
            /dev/sda5/ (root directory)
            /dev/sda5/dev (not sure on this, thinking the 1st /dev is what you meant)
            /dev/sda5/sys
            /dev/sda5/bin
            /dev/sda5/lib
            /dev/sda5/usr
            /dev/sda5/etc
            /dev/sda5/proc
            /dev/sda5/mnt
            /dev/sda5/tmp
            /dev/sda5/var
            /dev/sda6/home
            /dev/sda6/home/root

            I think understanding the hierarchy of it will help me to understand the partitioning aspect as well.

            Yeah, I would like to avoid wasting space in the long run, which is why I want to know what can be shared among OSes. But for now, I think I've got plenty of space to waste (596GB across 3-6 OSes!), so I'm not too worried about it.

            Also, if my theory is right that my motherboard RAID system will let me use the separate RAID volumes it creates as separate drives (Windows seems to obey this, probably not Linux), then I should still be able to create the drive as follows (right?):

            RAID0 (Windows) 3 Primary partitions of 40GB each
            Non-RAID (Linux) Extended partition, with previous list of partitioning (in my last post) being logical drives.

            This way Windows would have the Primaries it needs, and Linux doesn't care, right?







              #21
              Re: New to Linux - RAID installation

              /root is under / not /home, as the directory /root uses different permissions.

              yes to each install needing their own /boot, yes to sharing swap

              re size: if you use separate /temp and /home (or /files) 12gb would be nearly max needed. There's 30,000 packages available in the repos - but how many photo managers and word processors will you need? I have a fully loaded system; 1733 packages, six kernels, without /tmp and /home it's 9gb. I have games galore and four video editors, several dvd rip and burn programs, openoffice with clipart, 4 languages, wine, text-to-speech...you get the idea.

              sharing grub: yes, no and kind-of. Simple answer is yes, but not in the way you share /tmp and swap. I'll try and explain it: boot managers really only use 446 bytes (the rest of the MBR is partition info) and this code points them at a bit of code that leads to the bootstrap. Windows bootmanager points itself at the files "frozen" on your hard drive (used to be msdos.sys and io.sys I think) and starts the boot. grub is smarter but still points itself at some files on the hard drive, which is why it needs to be outside of the RAID - softRAID uses drivers that aren't loaded at this point in the boot process.

              How grub is usually "shared" is this: at your first linux install you install grub to the MBR and it points itself at the /boot directory of that install. For the subsequent linux installs, you don't need to re-install grub again - you just need to update the grub you already have installed so it "sees" the other installs. BTW grub is usually very good at auto-detecting windows and most linux installs without much user input - occasionally it fails and that's where noob's get frustrated. Also worth noting is that installing windows always overwrites the MBR, so install windows first and you shouldn't have issues - grub boots windows just fine.

              The usual way a noob would do it is install windows, leave some hd space for linux, boot the livecd, go through the installer - which installs grub at the end, and they're done. Dual-booting sometimes doesn't work for some but 90% of the time if you install windows first you don't have problems.

              To update grub you would boot to the install from which you created the initial grub install, run "update-grub" from the command line, and reboot.
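
              In practice that update is just a couple of commands from the install that owns grub (these are the GRUB2 / grub-pc commands - adjust if you're on legacy grub):

              # rescan for other installed OS's and rebuild the boot menu
              sudo update-grub
              # only needed if the MBR itself was overwritten (e.g. by a later windows install)
              sudo grub-install /dev/sda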

              So you're sharing in the sense that all your OS's are bootable from one grub. However, updating needs to be done from the original install - which is fine until you wipe or damage that install. This is why I use a totally separate grub partition. My grub MBR points at the special partition and that partition isn't dependent on any particular install so I can always boot to the grub menu even if I trash one of my OS installs. The downside to this approach is auto-updating grub using the update-grub feature can be difficult or impossible. My solution to this is to install grub to each partition that I install to, thus each OS has its own grub, then I edit my dedicated grub partition manually whenever I change something that requires it. This can be work intensive if you do a lot of OS installing/changing. One final option here is to have each install's grub config files identical and allow any of them to update your dedicated grub - this too, is a lot of work.

              Another method is to "chainload", which is boot-loader talk for booting from one boot loader to another. Since grub can be installed to partitions and not just the MBR, you can allow each OS to install its grub to its own partition, boot your primary grub, and then boot to one of the other grubs. Obviously, this method makes your boot take somewhat longer. To minimize delay with this method, you can set each of the "other" grubs to not display a boot menu and have zero delay in the boot (no waiting for input - just boot).
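
              For the chainload approach, a GRUB2 menu entry pointing at another install's partition would look roughly like this (the partition number is only an example; custom entries usually go in /etc/grub.d/40_custom, followed by running update-grub):

              menuentry "Second linux install (chainload)" {
                  set root=(hd0,7)
                  chainloader +1
              }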

              Told you there were choices! - This can be a bit overwhelming! Many people refer to this as "duel" booting rather than dual booting...lol

              To your grub question: Grub will be installed by your linux install (at the end of the install process) or you can do it manually from a live bootable grub cd or from any live cd that has grub on it. To install, set up, and update grub you use command line tools from a terminal.

              http://www.dedoimedo.com/computers/grub-2.html
              https://help.ubuntu.com/community/Grub2

              finally - your latest scheme looks fine. BTW not only does linux not care where it's installed, it doesn't care if you use separate partitions or not - it won't know. This is the argument for keeping most of the subdirectories together - no guessing as to how many MB's to give to each sub.

              I suggest for partitioning you begin by setting up your windows fakeRAID partitions however they need to be done. Linux won't care. Then download and burn a gparted livecd and boot to it. It will allow you to partition and format the remaining space without interfering with the other partitions. Then install and setup your new windows install and when it works like you need it to - then install linux and setup your dual boot.

              One last comment: Last night I was thinking over our discussion. The primary problem with RAID0 is that the likelihood of total failure increases linearly with the number of drives used in the array: if a 1-drive install's odds of the drive dying are X, 2 drives doubles the failure rate to 2X, 3 drives triples it to 3X, 4 quadruples it, and so on.

              So I came up with this scheme:

              4 drives
              RAID0 devices 3x5gb each done across three drives like:
              ABC ABD BCD
              then all three overlaid with RAID1. Now if any drive in the array fails, you still have one bootable install. You replace the bad drive, rebuild the RAID devices and you're good to go! This effectively reduces failure odds to way better than a single drive.

              I think it's the same failure rate as 2 two-device RAID0's overlaid with RAID1 i.e.
              AB CD, but you'd get more speed from the RAID with three devices.

              Currently - I use a four-device RAID0 and back it up to two different locations.

              My next computer will use 4 high speed SSD's in RAID and two 6Gb/s SATA drives for backup.



                #22
                Re: New to Linux - RAID installation

                Hey, sorry for my delay in getting back to you! Been busy the last few days, haven't had a chance. Eesh, work and computer problems. Well, internet problems actually. Quite a pain when you work from home!

                Okay, that makes more sense. I think I remember that now about /root being under "/".

                Yeah, that makes perfect sense about each OS needing its own /boot, and being able to share /swap.

                So my next question, if I use a separate /temp, how big should it be? This should use a RAID0, right? But /temp can be shared across Linux OSes, right? Any idea if UNIX would be able to share /temp, /swap, or anything like that with Linux (don't even know if it uses them)?

                So if I use separate /temp and /home or /files, 12GB is what I'd need for "/"? Okay, for the most part I agree, I probably wouldn't install very much in the way of programs that eat up that much space, but there are a few games I play that take 2+ GB (I've seen up to 14GB for just one) of space in Windows. What would I do for those? I install most of my Windows games onto my Data01 RAID0, so it uses its own bandwidth to improve performance (apart from the OS and lesser programs). How would I do that in Linux? Can I mount a folder on my Data01 drive in the root (what Windows considers D:\) to install my programs there? How do I do that (or how do I find documentation on doing so)?

                Okay, that GRUB info is pretty complex for now. I'll have to go with a basic "Install Windows, install Kubuntu" for now. But in the future, I intend to install at least 1 more Windows OS (Prob Win7, maybe WinServer), so that'll wipe my GRUB install in the MBR and make Linux not bootable, right? How do I fix that at that point? Use the Live CD to reinstall GRUB? I'll have to learn how to do that...

                While that argument does make sense about keeping Linux subdirectories together, you lose out on the performance/configurability aspect, like using RAID0 on /swap, /temp, or RAID1 on /boot, etc. So I'll go with it separate, just have to work out getting it done.

                So I've got you thinking so hard you're thinking about this at night, huh? What about that wife you've got a picture of on your profile? I'd only be able to think of my wife at night! Well, thanks and sorry!

                In any case, yeah the odds of losing data rises using RAID0, because any SINGLE failure causes the destruction of ALL data, but that's why I've got a whole backup drive and a script that automatically backs up all my new stuff whenever I run it to said drive, so at any point in time, I could lose the backup drive OR any/all drives in my array, and still have my data. The odds of losing both my backup drive AND any/all drives in my array at the same time are extremely low, especially since my Backup HDD (Data02) functions on a separate bus on the motherboard and my system is on a fairly good surge strip.

                I'll have to think about your RAID scheme. I think your scheme is incorrect slightly, but not positive. If you're using 4 drives, but RAID0 across 3, 2 issues:

                1 - Your scheme is using all 4 (ABC/ABD/BCD, includes ABCD), so how do you overlay that with RAID1? By using a different RAID0 scheme on the same drives (ACD is all that's left)? I'm pretty sure the scheme you're looking for would (standard) look like ABC/BCA/CAB, which you could then use drive 4 (D) as a RAID1 to the RAID0, but you'd lose performance since it'd have to write all the same data to your RAID1 as to the RAID0 - your RAID0 would have to wait on your RAID1. I think...
                2 - Your scheme (if possible) shows using drive B for all 3 parts of the RAID0, but ACD only 2x each, so you'd have a performance drop from drive B. In addition to this, if you combine it with my first point, you wind up not only losing more performance but also nullifying the purpose of overlaying with RAID1. Because if you lose any single drive, you're going to lose all data on both RAIDs.

                Yeah, your current setup sounds great! Performance and safety.

                4x SSD's? Ouch, that's pricey! Great performance, though! 2x 6GB/sec SATA drives? They make 6GB/sec SATA drives now? 6GB/sec for the bus potential you mean, right? Like SATAII's are up to 3GB/sec, but that's just the maximum of the SATAII bus, not each drive.


                  #23
                  Re: New to Linux - RAID installation

                  Originally posted by ViperLancer
                  Hey, sorry for my delay in getting back to you! Been busy the last few days, haven't had a chance. Eesh, work and computer problems. Well, internet problems actually. Quite a pain when you work from home!
                  I was on vacation until yesterday anyway!
                  So my next question, if I use a separate /temp, how big should it be? This should use a RAID0, right? But /temp can be shared across Linux OSes, right? Any idea if UNIX would be able to share /temp, /swap, or anything like that with Linux (don't even know if it uses them)?
                  To be technically correct, it's "/tmp" and everything in it (put there by linux) is supposed to be temporary. I share mine across all my os's and haven't had an issue yet. /tmp can also be set to be cleared at shutdown if you want a blank slate every time, but I've never (knocking) had an issue. I use RAID0 for best speed and I don't need file security for tmp files. I haven't used straight UNIX, but I doubt it would be an issue. I have played with FreeBSD and Solaris but I used virtual machines for that. swap space is cleared automatically at boot IF the swap does not match the OS being booted - at least with the distros I boot to.

                  /tmp size will depend on your use. I do a lot of video editing and DVD rips so I use 16gb. Enough for a double layer DVD iso plus 40% or so. I've never come close to filling it so I could get away with less. If you are not doing anything like that, you might start with as small as 4gb. The real advantage for me besides mixing in RAID0 speed is having tons of /tmp for all my installs without as much wasted HD space. In your case - you're starting with a single install so you really don't need it to be separate at this point. You can always easily add it later.
                  So if I use separate /temp and /home or /files, 12GB is what I'd need for "/"? Okay, for the most part I agree, I probably wouldn't install very much in the way of programs that eat up that much space, but there are a few games I play that take 2+ GB (I've seen up to 14GB for just one) of space in Windows. What would I do for those? I install most of my Windows games onto my Data01 RAID0, so it uses its own bandwidth to improve performance (apart from the OS and lesser programs). How would I do that in Linux? Can I mount a folder on my Data01 drive in the root (what Windows considers D:\) to install my programs there? How do I do that (or how do I find documentation on doing so)?
                  I think there are a couple of questions/answers here:
                  Installing a specific program to a separate partition (regardless of its format or location) is similar to the windows way but a lot easier to control. With windows, once you've installed a game to a particular "drive", relocating it can be problematic or even impossible without re-installing it. With linux, almost ANY directory can be relocated at any time (and back) totally transparently to the program itself. You need only create and mount the partition or drive in the proper location and you may do this before or after the program is installed. Let me use a specific example:
                  I have installed GameX to my linux install (let's assume all directories are installed together in a single partition). Because GameX is so large, my / directory is now 95% full. My hard drive is totally in use so I have no room to expand my partition, but I have space on my #2 HD. I need only locate where I installed GameX, move the files to another partition (even one on a network), and "mount" that partition as if it were the original install partition. Programs installed in linux usually default to /usr/something or /opt (rarely anymore) so I go looking and find GameX in /usr/local/games/GameX and indeed there are 10GB of files there. So I format the available space on drive 2, temporarily mount this new partition as /mnt/GameX, move all the files from /usr/local/games/GameX to /mnt/GameX, remount the new partition as /usr/local/games/GameX and add it to /etc/fstab to make it permanent, and I'm done. No reboot or re-install required. This method fails only if a file you want to use is being used at the time you want to move it. Simply shutdown the program or do the moving "offline" by booting to a liveCD.
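
                  Roughly, the commands for that move would be something like this (the /dev/sdb8 device is just a made-up example for the spare space on drive 2):

                  # format the spare space on drive 2 and mount it somewhere temporary
                  sudo mkfs.ext4 /dev/sdb8
                  sudo mkdir -p /mnt/GameX
                  sudo mount /dev/sdb8 /mnt/GameX

                  # move the game's files onto the new partition (watch for hidden files too)
                  sudo mv /usr/local/games/GameX/* /mnt/GameX/

                  # remount the new partition in the game's original location
                  sudo umount /mnt/GameX
                  sudo mount /dev/sdb8 /usr/local/games/GameX

                  # and add a line like this to /etc/fstab to make it permanent:
                  # /dev/sdb8   /usr/local/games/GameX   ext4   defaults   0   2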

                  If you knew in advance you would need the space and were able to figure out where GameX would install (file locations are listed in the install package) you could prepare the new partition in advance of the install. Or you could relocate the entire /usr partition to free up even more space. It makes no difference to linux if the directory is on this drive or that one or RAID or LVM or even a RAM drive - as long as it's available when it looks for it. Remember: Linux is all about User Control and Options.

                  You referred to windows games and their size - I haven't come across a linux game or any other program larger than a few GB's, but I do install windows games to my linux platform using "Wine". Since Wine installs windows games to your home directory by default, space in / is not an issue. However, if you have a multiuser system (you don't, right?) anything installed this way is not available to the other users unless they install their own copy. You can see how this would eat up a ton of HD space very quickly. The way around this is to configure Wine to use a separate partition for drive space and then install programs there just like you would for windows. Then allow every user access to the Wine partition and voila - you're sharing windows games.
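
                  One way to set that up (assuming Wine's usual per-user layout; the device and mount point are only examples) is to give Wine an extra drive letter that points at the shared partition:

                  # mount the shared games partition somewhere every user can reach (/etc/fstab line)
                  /dev/sdb9   /mnt/winegames   ext4   defaults   0   2

                  # then, per user, map it as a Wine drive letter inside their Wine prefix
                  ln -s /mnt/winegames ~/.wine/dosdevices/e: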

                  Other methods of installing windows software to linux include Crossover and Cedega products - check them out.

                  Okay, that GRUB info is pretty complex for now. I'll have to go with a basic "Install Windows, install Kubuntu" for now. But in the future, I intend to install at least 1 more Windows OS (Prob Win7, maybe WinServer), so that'll wipe my GRUB install in the MBR and make Linux not bootable, right? How do I fix that at that point? Use the Live CD to reinstall GRUB? I'll have to learn how to do that...
                  You are correct in that any subsequent windows install will wipe your grub MBR. It can be restored from the liveCD, or boot from a liveCD into your install and do it there, or boot to a GRUB liveCD and do it - there are dozens of other ways. There are literally hundreds of forum posts here and elsewhere and even whole web pages devoted to doing this. My suggestion is to pick a method you're most comfortable with and prepare for it in advance. My method is to back up my MBR to a thumb drive and restore it by booting to a liveCD, mounting the thumb drive (or using a live bootable thumb drive - 1GB is plenty of space) and restoring it. That way the exact MBR is restored. Remember only the first 446 bytes are MBR, the next 64 bytes are partition info, the last 2 are signature bytes. If you re-partition your drive AFTER you backup your entire MBR, you wouldn't want to overwrite the new partition info. Easily prevented by partitioning, then backing up the MBR, or by simply restoring only the first 446 bytes of the backup. I use a linux tool called "dd" to do this.
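
                  A sketch of that dd backup/restore (the thumb drive path is only an example - and double-check the device name before running anything like this):

                  # back up the whole 512-byte MBR (boot code + partition table) to the thumb drive
                  sudo dd if=/dev/sda of=/media/thumbdrive/mbr.backup bs=512 count=1

                  # restore only the first 446 bytes (boot code), leaving the partition table alone
                  sudo dd if=/media/thumbdrive/mbr.backup of=/dev/sda bs=446 count=1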

                  While that argument does make sense about keeping Linux subdirectories together, you lose out on the performance/configurability aspect, like using RAID0 on /swap, /temp, or RAID1 on /boot, etc. So I'll go with it separate, just have to work out getting it done.
                  I was only thinking about your learning curve - since all the RAID and partition stuff can just as easily be done later.

                  So I've got you thinking so hard you're thinking about this at night, huh? What about that wife you've got a picture of on your profile? I'd only be able to think of my wife at night! Well, thanks and sorry!
                  She can be distracting!

                  I think your scheme is incorrect slightly,
                  Yes, you're right. Later contemplation revealed the need for 4 separate RAID0 duplicates:
                  ABC, ABD, ACD, BCD

                  Laying RAID1 on top of RAID0 is called "nesting". I think you'd lose a very little bit of performance - the system wouldn't "wait" to do all the writes - they would occur in background time when the HD was available. This would be a good weekend testing project to see what the actual results were!

                  To do this (more commonly done with a single pair of RAID0 devices), simply build a RAID1 device out of your RAID0 devices.

                  Example:
                  RAID0
                  ABC = /dev/md1
                  ABD = /dev/md2
                  ACD = /dev/md3
                  BCD = /dev/md4

                  RAID1
                  /dev/md1 + /dev/md2 + /dev/md3 + /dev/md4 = /dev/md5

                  Then you install to /dev/md5 and you've got four copies automatically. If a drive fails, replace the dead drive, rebuild your RAID1 device and you're back in business.
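
                  In mdadm terms that nesting would go roughly like this (drive letters A-D map to sda-sdd; the partition numbers are just examples):

                  # four 3-disk RAID0 devices, each one leaving out a different drive
                  sudo mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda5 /dev/sdb5 /dev/sdc5   # ABC
                  sudo mdadm --create /dev/md2 --level=0 --raid-devices=3 /dev/sda6 /dev/sdb6 /dev/sdd5   # ABD
                  sudo mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/sda7 /dev/sdc6 /dev/sdd6   # ACD
                  sudo mdadm --create /dev/md4 --level=0 --raid-devices=3 /dev/sdb7 /dev/sdc7 /dev/sdd7   # BCD

                  # then overlay all four with a RAID1
                  sudo mdadm --create /dev/md5 --level=1 --raid-devices=4 /dev/md1 /dev/md2 /dev/md3 /dev/md4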

                  4x SSD's? Ouch, that's pricey! Great performance, though! 2x 6GB/sec SATA drives? They make 6GB/sec SATA drives now? 6GB/sec for the bus potential you mean, right? Like SATAII's are up to 3GB/sec, but that's just the maximum of the SATAII bus, not each drive.
                  The newest SATA standard is called "SATA 6Gb/s" rather than SATAIII (who knows why...). The newest mobos have it and USB 3.0 also. I can feel my credit card warming up as I type! Drives are available in SATA (1.5Gb/s), SATAII (3Gb/s) and SATA 6Gb/s now. Obviously they don't really reach those speeds.

                  Here are my hdparm results (system was in use), SATAII drives.

                  no RAID
                  Timing cached reads: 8188 MB in 2.00 seconds = 4096.09 MB/sec
                  Timing buffered disk reads: 382 MB in 3.01 seconds = 127.05 MB/sec

                  RAID5 (four devices)
                  Timing cached reads: 8878 MB in 2.00 seconds = 4442.74 MB/sec
                  Timing buffered disk reads: 696 MB in 3.01 seconds = 231.61 MB/sec

                  RAID0 (four devices)
                  Timing cached reads: 9010 MB in 2.00 seconds = 4508.32 MB/sec
                  Timing buffered disk reads: 1134 MB in 3.00 seconds = 377.97 MB/sec
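
                  (For reference, numbers like those come from something along the lines of "sudo hdparm -tT /dev/md5" - the -T figure is the cached/memory read, the -t figure is the buffered disk read; the device name is whatever you want to test.)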

                  SSD's are getting reachable price-wise and will be more so when I finally get around to a new computer. At my last look - if you really want speed - you can get PCI-E interfaced SSD's that go up to 1.4Gb/s, and SSD's DO perform near their advertised speeds. 512gb will cost you $9500!!!

                  Cost wise I'm now thinking a pair of 240gb PCI-E x4 cards that spec at 540MB/s (currently $700 each).

                  A lot of the above discussion will be moot when btrfs is ready for prime time and SSD's are the norm. I give it a year... 8)



                    #24
                    Re: New to Linux - RAID installation

                    Just for giggles I took one of my RAID0 installs and mated it via RAID1 with a non-raid partition of the same total size to see the performance hit. The results were a bit shocking:

                    Non-RAID partition:
                    Timing cached reads: 8900 MB in 2.00 seconds = 4453.65 MB/sec
                    Timing buffered disk reads: 380 MB in 3.01 seconds = 126.16 MB/sec

                    RAID0 partition:
                    Timing cached reads: 8064 MB in 2.00 seconds = 4035.05 MB/sec
                    Timing buffered disk reads: 1242 MB in 3.00 seconds = 413.92 MB/sec

                    RAID1 partition (of both partitions above)
                    Timing cached reads: 7748 MB in 2.00 seconds = 3876.71 MB/sec
                    Timing buffered disk reads: 312 MB in 3.00 seconds = 103.99 MB/sec

                    Please note this is way far away from scientific data - I didn't bother to close anything or stop the RAID1 device before checking the RAID0 speeds and I only did a single test, nor did I test any "real-world" functions i.e. transferring files to and fro...

                    Interesting that the RAID1 device is even slower than the non-RAID partition as a stand alone. Clearly, there is some performance hit when using RAID1 as you suggested. I suppose the real test would be actual performance vs. time spent making backups (rather than using RAID1) vs. real-world need to be running again immediately.

                    If you had a totally automated regular backup that occurred while your system was not in use, and it was done in a way that did not require restoring before accessing it - like a drive mirror - that would be better performance-wise than RAID1.



                      #25
                      Re: New to Linux - RAID installation

                      That is interesting -- thanks for posting it.

                      The last time I built a Windows system (early 2004), I bought a pair of WD Raptors (72GB) and mated them in a RAID 1 mirror, for data security. That machine has been my wife's for the past 4 years, since I built a Linux box for myself. Earlier this year, I decided 6 years was long enough, and I pulled them and replaced them with a 200GB SATA disk (WD-something, don't remember) and re-installed Win XP for her.

                      But, the experience of observing 6 years of failure-free operation, plus seeing the MTBF specs on the current generation of conventional hard drives, plus the overwhelming scarcity of reports of hard drive failures, leads me to think RAID 1 is pretty much a waste of a hard drive for desktop computer purposes (assuming some reasonable backup scheme is used).

                      Also, isn't it the case that conventional hard drives can't saturate the SATA (1, 2 or 6) channel anyway? In other words, even striping them isn't going to get us where a SSD will be in a year or two, right?



                        #26
                        Re: New to Linux - RAID installation

                        Sorry for the delay in my response! Been busy with life, and sadly still haven't installed Linux on my system - cause I've been working on my boss's laptop, his server, and the 2 PC's I just built for my parents (especially now that my mom broke her Windows).

                        Well, I agree with your thought that RAID1 is mostly a waste, save for circumstances where any downtime at all is unacceptable. But it also depends on your drives/money/backup situation. I personally like how I run my system:

                        2x 320GB RAID0 - System drives - holds multiple OS's (currently WinXP x64 & Win7 Enterprise x64, will add WinServer2003 Enterprise x64, Kubuntu x64, some version of FreeBSD, etc).

                        2x 1TB RAID0 - Data drives - Independent of OS, accessible to any OS, keeps OS/Data transfers from affecting each other.

                        1x 2TB standalone - Backup drive - I run a script I wrote that backs up what I want from System/Data to this, accessible instantly.


                        Now I've also read that a RAID1 is supposed to keep about the same write speeds as a standalone, but increase read speeds, so I'm wondering if Oshunluvr's test is inaccurate due to him using a RAID1 of a RAID0/standalone...
                        I also do have a question about said test, what exactly are the "Timing cached reads"?
                        The buffered disk reads are giving the numbers I'd expect to see for reading from an HDD, but those cached read numbers are way too high for actual read speeds off a drive/array...

                        Yes, conventional HDD's can't fully utilize SATA channels, but cost-wise you're still talking about HUGE differences! I figured this out for my boss recently upgrading his server & laptop. The more expensive 2.5" drive ran $80 for 500GB, the 3.5" was $60 or $70 for 1TB (don't remember if I got it on sale or not). Compare that to an SSD that's $200 for 80GB, and you're talking about $2.5/GB instead of 6-8 GB/dollar. Nowhere near worth it yet, so really who cares? I mean, if you want performance, buy 4 standard SATA drives and RAID0 them for each SSD drive, you'll get WAY more capacity and even better overall throughput, though your latency would still be worse.

                        And hopefully this weekend, I'll get to actually INSTALLING Kubuntu.

                        Oh, and just for laughs, you guys should read this article and then the comments about it (not too much reading). It's hilarious to me.
                        http://www.zdnet.com/blog/murphy/lin...ndows-lite/858


                          #27
                          Re: New to Linux - RAID installation

                          For my next system (read: 2 years out, unless this one dies sooner) I'm planning dual 240gb PCI-e x4 cards instead of hard drives and a 6Gb/s hard drive for backup.

                          The PCI-e bus cards are WAY faster than the 2.5" replacement drives by a factor of 4 up to 10 depending on what you spend.

                          As far as my little test - I suspect if I had used two RAID0 devices rather than 1 RAID0 and 1 non-RAID I wouldn't have lost as much speed. Interesting test though.



                            #28
                            Re: New to Linux - RAID installation

                            Yeah, I agree that if you'd used 2x RAID0 arrays under that RAID1, it probably would've performed much better.

                            I think I need to get more info about how to actually perform the setting up of the RAID, the command to use, how to even find it (cause I was having problems with that), and an example of the syntax I'd need (though I'm aware of doing "command -man" to get info). There've been several commands mentioned, but I think DMRAID is the one I'm supposed to use... Right?

                            Also, since the RAID setup in Linux is tied to the OS, do I set up the RAID during the install of Linux, or before, or How do I do so during the setup? Also, what IS the difference between the Linux CD's: alternate, desktop, etc? These are the biggest reasons I still haven't installed Linux, cause I'm just not sure what I'm supposed to do here... And I haven't had time!
                            If that is asking too much, then I apologize, and I can start trying to mess around with getting it installed and see if I can figure it out. Even if it isn't laid out step by step for me, I really just don't have any idea how to begin for now.

                            Really, thanks to all of you who've offered your insight and assistance on this. Particularly oshunluvr, who's offered much of his time thinking about my questions and comments (starting to scare me that you're thinking about me at night... lol), and done testing of his own to help me out (and all of us) with information on various aspects of RAID performance and such.

                            This community is definitely a friendly, helpful one.


                              #29
                              Re: New to Linux - RAID installation

                              Originally posted by oshunluvr
                              For my next system (read: 2 years out, unless this one dies sooner) I'm planning dual 240gb PCI-e x4 cards instead of hard drives and a 6Gb/s hard drive for backup.

                              The PCI-e bus cards are WAY faster than the 2.5" replacement drives by a factor of 4 up to 10 depending on what you spend.

                              As far as my little test - I suspect if I had used two RAID0 devices rather than 1 RAID0 and 1 non-RAID I wouldn't have lost as much speed. Interesting test though.
                              Oh, I forgot I wanted to ask about the PCI-e cards you mentioned... Are they SSD's connected via PCI-E, or something else? I've heard of PCI storage devices, but I've never actually researched them so I'm completely unfamiliar... That performance factor would be impressive though!


                                #30
                                Re: New to Linux - RAID installation

                                Oh, holy crap those are expensive still! Over $600 for a 240GB PCI-E SSD... That OCZ RevoDrive sure has some awesome performance specs, though!

                                http://www.ocztechnology.com/product...ress-ssd-.html
                                http://www.pcper.com/article.php?aid=913
