Dual Boot w/ AMD RAID


  • #11
    Originally posted by deanjo
    Sorry, but there is a legitimate use, and it is exactly what he is doing: dual-boot systems. It is the only way short of going to a hardware-based RAID setup. As far as migration goes, moving to another board is usually not a problem as long as it uses the same brand of chipset; it doesn't even have to be the exact same chipset. I've personally migrated fakeraids with no issues in such cases, Silicon Image to another Silicon Image controller, and the same with nForce chipsets (nForce3 --> nForce 570 --> nForce 780a), with zero problems. As far as "slowness" goes, the speed is bang on identical to a software RAID (sometimes even a weeee bit faster). With any hardware solution you're going to have issues migrating from one brand of controller to another. Since he is running RAID 1 he is not after a speedup but redundancy, which fakeraid is more than capable of, and it still allows him to dual boot with that redundancy.
    Sorry, no, that is not a legit reason for fakeraid. You don't need fake or hardware RAID to dual-boot two different operating systems. You simply set up your SOFTWARE RAID as normal in both OSes.



    • #12
      Originally posted by droidhacker
      Sorry, no, that is not a legit reason for fakeraid. You don't need fake or hardware RAID to dual-boot two different operating systems. You simply set up your SOFTWARE RAID as normal in both OSes.
      Sorry, but you're just spewing the usual "fakeraid sucks" elitist view, the "must have everything or bust" attitude, which doesn't hold for most consumers wanting RAID offerings. There is also one other advantage of a BIOS RAID over software RAID: you don't need a working OS to rebuild the array.



      • #13
        I can see that people feel a bit strongly about their positions on fakeraid here...

        I figured I'd follow up and let everyone know my findings.

        First, my partition setup:
        /dev/sd[ab]1 - Windows 7 - NTFS - ~40GB
        /dev/sd[ab]2 - Ubuntu - Ext3 (or 4, don't remember) - ~40GB
        /dev/sd[ab]3 - Data Drive - NTFS - ~560GB

        If it were as simple as using Windows-based RAID on partition 1 and software RAID on partition 2, I'd have been mildly tempted to do that. The data partition, which is shared between Windows and Linux, is another matter: it needs to be writable by both OSes without either one corrupting the mirror. I have no clue whether it's possible to convince both Windows and Linux to use their own respective software-RAID schemes on the same partition without them interfering with each other.

        I did a bit of reading last night and determined that the Ubuntu 10.04 desktop install disc supports dmraid, but my upgraded Ubuntu install (originally 9.04) didn't have dmraid installed. When I first tried to activate dmraid from the Ubuntu install CD, I got the error "dos: partition address past end of RAID device". It turns out my partitions were created before the RAID1 array, so /dev/sd*3 extended into the RAID controller metadata, which sits in the last sectors of the disk. Steps 1-6 below are what it took to fix that.
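
        If you want to sanity-check whether you have the same layout problem before tearing anything down, something like the following should work from the live CD. Treat it as a rough sketch: the exact output fields, and the idea of comparing the two numbers by hand, are my assumptions rather than anything from the dmraid docs.

          sudo dmraid -s             # reports the RAID set and its size in sectors
          sudo fdisk -l -u /dev/sda  # lists partitions with start/end sectors
          # If the end sector of /dev/sda3 is at or beyond the RAID set size reported
          # by dmraid, activation fails with "dos: partition address past end of RAID
          # device", because the controller metadata occupies the last sectors of the disk.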

        Simplified list of steps taken (there were many dead-ends in the discovery process); a rough command sketch for the chroot part follows the list:
        1) Destroy the RAID1 array in the BIOS. The data is left intact by the controller.
        2) Boot from Ubuntu 10.04 install CD.
        3) Use GParted to resize /dev/sda3 so that there's free space at the end of the drive; shrinking the partition is what got rid of the "partition address past end of RAID device" error mentioned above.
        4) Use dd to mirror /dev/sda to /dev/sdb
        5) Reboot and re-enter the controller BIOS.
        6) Recreate the RAID1 mirror with both drives as members.

        7) Boot using Ubuntu 10.04 install CD.
        8) Run 'sudo dmraid -ay' to have dmraid create device-mapper entries for the RAID set and its partitions under /dev/mapper.
        9) Mount '/dev/mapper/pdc_*2' as /mnt/root, substituting whatever your Linux root partition is called in /dev/mapper/.
        10) Bind-mount /dev onto /mnt/root/dev, and do the same for /sys and /proc. Also copy /etc/resolv.conf to /mnt/root/etc so name resolution works inside the chroot.
        11) chroot to /mnt/root
        12) apt-get install dmraid
        13) If all goes well, the dmraid install will update your Grub2 grub.cfg to boot from the RAID device. If not, tweak /etc/grub.d/* and run update-grub manually.
        14) Make sure that /etc/fstab mounts partitions by UUID, or that the device names are correct (/dev/mapper/* instead of /dev/sd*).
        15) Reboot to OS of choice.
        16) If you want to be sure about RAID mirror consistency, perform a Synchronization check from the Windows AMD RAIDXpert tool.
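
        To make the second half of that list concrete, here is roughly what steps 7-14 look like as commands from the 10.04 live CD. The pdc_XXXXXXXX names are placeholders for whatever dmraid creates on your system, and the grub/apt behaviour is as I remember it, so take this as a sketch rather than a copy-paste recipe:

          # (step 4 earlier was just a whole-disk copy: sudo dd if=/dev/sda of=/dev/sdb bs=1M)
          sudo dmraid -ay                                    # step 8: activate the fakeraid set
          ls /dev/mapper/                                    # confirm the pdc_* device and partition nodes exist
          sudo mkdir -p /mnt/root
          sudo mount /dev/mapper/pdc_XXXXXXXX2 /mnt/root     # step 9: your Linux root partition
          for fs in dev sys proc; do sudo mount --bind /$fs /mnt/root/$fs; done   # step 10
          sudo cp /etc/resolv.conf /mnt/root/etc/            # so DNS works inside the chroot
          sudo chroot /mnt/root                              # step 11
          apt-get install dmraid                             # step 12: pulls dmraid into the installed system
          update-grub                                        # step 13: regenerate grub.cfg if the package install didn't
          blkid                                              # step 14: grab UUIDs to check against /etc/fstab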


        As for those arguing about performance drawbacks and such, I've had no problems with performance on this system so far. I am aware that read errors on the first drive can still bring the system down, but they should at least leave the second drive intact. In the future I may consider a hardware RAID card, but for now I'm only really interested in the redundancy aspect: I want to minimize my recovery time in the event of a hard drive crash, and I also perform semi-regular backups to an external disk.

        With the current system I'm using, I can write to any mounted partition on my system from any OS that is installed and still maintain the integrity of my backup, which is all I'm really asking for.

