
  • Dual Boot w/ AMD RAID

    I'll start with the system specs:

    Gigabyte AM2+ 785G mATX motherboard
    2x Western Digital 640GB Caviar Blue
    Phenom X6 1055T
    4GB DDR2
    Radeon 4770

    Windows 7 (/dev/sda1, /dev/sdb1)
    Ubuntu 10.04 (/dev/sda2, /dev/sdb2)

    I have the two HDs set up in a chipset RAID 1 array, which is working fine in Windows. I have Grub2 installed as the primary boot loader.

    The problem: When I update grub for a new kernel install (e.g. 2.6.35 from mainline PPA), the update-grub process sees the new kernel, but when I reboot, I only get the pre-existing kernels as boot options.

    I'm pretty sure this is because Ubuntu is writing the new kernel and grub configuration to only one of the drives, while grub is booting from the other. When I list /dev/sd*, I get separate entries for both /dev/sda* and /dev/sdb*, which tells me that Ubuntu is mounting only one of the drives as a plain disk and not respecting the RAID array I have set up.

    When I go into a grub command prompt at boot time, all I get is (hd0,*) listed, so it seems that grub only sees the primary drive in the array.

    The big question: Is there any way to force Linux to treat both drives as a true RAID 1 array while still leaving Windows 7 bootable from the chipset RAID setup? I've looked at a few cheap hardware RAID cards (well, as cheap as those things get), but I'd rather not throw hardware at this if I can help it. Will mdadm and the other Linux RAID tools only work for a Linux-only setup, or can I use them to get Linux to cooperate without interfering with Windows?
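
    For anyone diagnosing the same symptom, a rough way to check whether the running system sees the fakeraid set at all would be the standard dmraid commands below; the set names under /dev/mapper will differ per board:

        # List disks carrying BIOS/fakeraid metadata that dmraid recognizes
        sudo dmraid -r

        # Show the RAID sets dmraid has discovered and whether they are active
        sudo dmraid -s

        # Activate any inactive sets and look for the mapped device nodes
        sudo dmraid -ay
        ls -l /dev/mapper/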

  • #2
    First of all, your setup is wrong. Ubuntu has absolutely no support for dmraid in the installer. You can NOT access the drives without dmraid (that is, without going through /dev/mapper/xxx). If you access them directly the way you are doing, everything is bound to go wrong. If you want to remove your RAID:

    sudo dmraid -Er

    and the metadata is removed. If you want to keep it, use a distro whose installer can handle dmraid; that would be Fedora or openSUSE.
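
    To make the /dev/mapper point concrete, a minimal sketch of what working through dmraid looks like; the set name here is purely illustrative (AMD/Promise-format sets often show up with a pdc_ prefix, but yours will differ):

        # Activate all detected fakeraid sets; mapped devices appear under /dev/mapper
        sudo dmraid -ay
        ls -l /dev/mapper/

        # Hypothetical example: mount the second partition of the set, not the raw /dev/sdaX
        sudo mount /dev/mapper/pdc_xxxxxxxx2 /mnt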



    • #3
      Yup, the simple answer here is: if you want to keep your BIOS RAID setup, switch to openSUSE or Fedora, which support booting from such setups.



      • #4
        Thanks Kano/deanjo,

        I'll have to give some thought to this one. I'm not sure if I want to switch to SUSE/Fedora at this time, or whether a hardware RAID card (w/ cache and potential for RAID 5) would be the better solution for me. Time to install a few new VMs and play around for a bit.



        • #5
          Originally posted by Kano View Post
          First of all, your setup is wrong. Ubuntu has absolutely no support for dmraid in the installer. You can NOT access the drives without dmraid (that is, without going through /dev/mapper/xxx). If you access them directly the way you are doing, everything is bound to go wrong. If you want to remove your RAID:

          sudo dmraid -Er

          and the metadata is removed. If you want to keep it, use a distro whose installer can handle dmraid; that would be Fedora or openSUSE.
          I've started looking into dmraid a little bit, and it looks like Ubuntu added dmraid support in the installer for 10.04 LTS. I'm going to have to give this a try tonight (after a full backup of course).
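
          For anyone following along, a rough sketch of what I plan to try from the 10.04 live session; this assumes the live environment has network access, and the set name under /dev/mapper will vary:

              # Install dmraid in the live session and activate the BIOS RAID set
              sudo apt-get update && sudo apt-get install dmraid
              sudo dmraid -ay

              # The installer should then offer the /dev/mapper device instead of the raw /dev/sda and /dev/sdb
              ls -l /dev/mapper/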



          • #6
            You really should lose the fakeraid. It has all the advantages of NOTHING and all the disadvantages of both hardware AND software raid. Pick one or the other -- not the retarded illegitimate offspring of both.

            With hardware raid, you get the performance advantage. With software raid, you get the portability advantage. With fakeraid, you get both drawbacks: it is SLOW and it locks you in to a particular piece of hardware.
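
            For the pure software-raid route, a minimal sketch of what a Linux md RAID 1 looks like; /dev/sdX2 and /dev/sdY2 are placeholders for two empty partitions, and the create step wipes whatever is on them:

                # Build a two-disk md RAID 1 mirror from two empty partitions (destroys their contents)
                sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX2 /dev/sdY2

                # Put a filesystem on the mirror and record the array so it assembles at boot
                sudo mkfs.ext4 /dev/md0
                sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

                # Watch the initial sync
                cat /proc/mdstat

            Keep in mind that Windows cannot read an md mirror, which is the trade-off for dual-boot setups discussed below.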



            • #7
              Originally posted by droidhacker View Post
              You really should lose the fakeraid. It has all the advantages of NOTHING and all the disadvantages of both hardware AND software raid. Pick one or the other -- not the retarded illegitimate offspring of both.

              With hardware raid, you get the performance advantage. With software raid, you get the portability advantage. With fakeraid, you get both drawbacks: it is SLOW and it locks you in to a particular piece of hardware.
              Sorry, but there is a legitimate use, and it is exactly what he is doing: dual-boot systems. It is the only way to get redundancy across both OSes without going to a hardware-based RAID setup.

              As far as migration goes, moving to another board is usually not a problem if it uses the same brand of chipset; it doesn't even have to be the exact same chipset. I've personally migrated fakeraids with zero issues in such cases, Silicon Image to another Silicon Image controller, and the same with nForce chipsets (nForce3 --> nForce 570 --> nForce 780a).

              As far as "slowness" goes, the speed is essentially identical to software RAID (sometimes even a wee bit faster), and with any hardware solution you're going to have issues migrating from one brand of controller to another. Since he is running RAID 1, he is not after a speed-up but redundancy, which fakeraid is more than capable of providing while still letting him dual boot.



              • #8
                Well, nvidia RAID is a dead end now... At least Linux can access it on any platform; you just cannot boot from it.
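
                For what it's worth, dmraid can list which vendor metadata formats it understands; a quick check (exact output varies by dmraid version, but it typically includes entries such as isw for Intel, nvidia, pdc for Promise, and sil):

                    # List the fakeraid metadata formats this dmraid build supports
                    dmraid -l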



                • #9
                  Originally posted by Kano View Post
                  Well, nvidia RAID is a dead end now... At least Linux can access it on any platform; you just cannot boot from it.
                  Since it is essentially a clone of the Intel RAID format, I wouldn't be surprised if it would migrate to an Intel chipset.



                  • #10
                    I highly doubt that.

