EXT4 Lets Us Down, There Goes Our R600/700 Mesa Tests


  • EXT4 Lets Us Down, There Goes Our R600/700 Mesa Tests

    Phoronix: EXT4 Lets Us Down, There Goes Our R600/700 Mesa Tests

    For the past several days, benchmarks have been running on a plethora of ATI Radeon HD 2000/3000/4000 (R600/700 generation) graphics cards, as well as some of the older Radeon X1000 (R500) hardware for reference. All of this testing has been done with the current open-source ATI driver stack with Mesa to show where performance stands for the H1'2010 Linux distributions...

    http://www.phoronix.com/vr.php?view=Nzk0OA

  • #2
    Alright already. We get it. Ext4 sucks. Quitcherbitchin' about it in every other article.



  • #3
    reisub

    I can't guarantee that it would have worked in this instance, but back when I still used to get hard locks with my ATI card, the Alt+SysRq+REISUB sequence saved me quite a lot of data, especially the S (sync) part.
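For anyone who hasn't used it, the same REISUB keys can also be fired from a shell (e.g. over SSH when the console is locked) by writing to /proc/sysrq-trigger. A minimal sketch, assuming root and a kernel built with CONFIG_MAGIC_SYSRQ; it defaults to a dry run that only prints the commands, since the real sequence reboots the box:

```shell
#!/bin/sh
# Sketch of driving Alt+SysRq+REISUB from a shell. DRY_RUN defaults to
# "echo" so the commands are only printed; clear it to actually run them.
DRY_RUN=${DRY_RUN:-echo}

# Enable all SysRq functions (needs root).
$DRY_RUN sh -c 'echo 1 > /proc/sys/kernel/sysrq'

# unRaw, tErminate, kIll, Sync, Unmount, reBoot -- the S (sync) and
# U (remount read-only) steps are what save your data.
for key in r e i s u b; do
    $DRY_RUN sh -c "echo $key > /proc/sysrq-trigger"
    $DRY_RUN sleep 2    # give sync/umount time to finish
done
```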



  • #4
    *cough* back-ups *cough*

    Sorry, as a Hell-Desk worker, I can't find much sympathy for any techie losing more than a day's worth of data.



        • #5
          sucks about the lost tests though. i noticed data loss / corruption problems in the early 2.6.32-rcs, but haven't had a problem since. (of course most likely at the cost of performance.

          Perhaps ext3 will be a better option until btrfs is stabilized.



  • #6
    Nice way (or not? heh) to prompt implementing remote export of PTS result data during testing. Great feature.



  • #7
    Actually, it may not be an actual EXT4 bug. I was doing some work a few days ago on a tmpfs (I'm also using a .32 kernel) and the whole directory just disappeared by itself, while I've been using EXT4 for a long time already and haven't lost a single file to it. And yes, I've done quite a few hard reboots over this filesystem's lifetime. (Well, this one may be unrelated; I'm just saying that it may be a bug elsewhere.)



  • #8
    Originally posted by RobbieAB View Post
    *cough* back-ups *cough*

    Sorry, as a Hell-Desk worker, I can't find much sympathy for any techie losing more than a day's worth of data.
    Yeah, any testing that involves likely and repeated crashing and hard restarts means backups and fscks are mandatory IMO. I'd do it over a network and just write the results directly over NFS or rsync or something.

    If I'm bisecting a panic-inducing kernel commit, or testing unstable overclock settings with mprime, I always back up beforehand. The same would apply to testing a boatload of video cards likely to cause an X lockup, though you can sometimes SSH in and reboot 'gracefully' from those.

    BTW: that has to be a pain, swapping out all those cards. Are they hot-swappable or something, or do you use kernel soft reboots?



  • #9
    Next time, store test results remotely, e.g. on NFS. That way a software or hardware failure on the test box will not cause loss of the test data.



  • #10
    Originally posted by Smorg View Post
    BTW: that has to be a pain, swapping out all those cards. Are they hot-swappable or something, or do you use kernel soft reboots?
    I've just been turning it off, swapping the card, and cleanly powering the system back up.
    Michael Larabel
    http://www.michaellarabel.com/
