EXT4 Lets Us Down, There Goes Our R600/700 Mesa Tests


  • #16
    The cat ate my homework, heh. Blame ext4, that's the ticket.



    • #17
      Sad to hear all that work is gone. Well, at least you can file a bug report.

      But there is an ooold rule:

      Backups.

      Do them on physically separate data carriers.

      I learned that about 8 years ago during a hardware failure, and I was glad I had at least 50% of my data off that drive. I was in the middle of a backup run when it fubared.

      Also: don't use unfinished file systems. I keep away from any of the new ones; they may perform better, but I don't care. Data safety is far more important. One can use them when they're ready.
      I just find it hilarious to see the many bugs and problems ext4 causes after it was announced as the greatest thing since sliced bread. Wasn't it some of the ext4 devs, iirc, who were against letting the Über-filesystem reiser4 into the kernel? And that was some years ago.

      Anyway.
      Hope you'll find the time to redo the benchmarks.



      • #18
        That's a pity!
        One question: you're blaming ext4, but how do you know the culprit isn't some other, unrelated bug?



        • #19
          That's what you get for trusting the extX devs when they declare their stuff 'stable'.

          ext4 should rot in the staging section of the kernel.



          • #20
            Originally posted by Adarion View Post
            ...Wasn't it some of the ext4 devs, iirc, who were against letting the Über-filesystem reiser4 into the kernel? And that was some years ago.

            Not only that - all the people who attacked and blocked reiser4 over 'layering violations' have no problem with btrfs, which does the same thing but much, much worse.



            • #21
              Well that sucks. Results would have been interesting.



              • #22
                The Phoronix Test Suite even keeps backup copies of the XML test results from previous runs in a separate file to fend off data loss problems like that, but alas they were stored in the same directory.

                lmao

                Nice "backup" scheme you've got going there - and on a striped RAID, no less.

                FYI, I just had a power cut today and my home server (which has been running EXT4 for the last year) handled it nicely, no UPS either.



                • #23
                  Originally posted by Naib View Post
                  lmao

                  nice "backup" scheme you got going there...
                  Usually it serves its purpose fine, and it worked well back when the EXT4 problem was zero-length files after crashes.
                  Michael Larabel
                  http://www.michaellarabel.com/



                  • #24
                    Backups are just a waste of resources. EXT4 developers need to code right.



                    • #25
                      It's on the same hardware, so it's not a backup. You should know that.



                      • #26
                        Just in time for 2.4.0, I finished off a phoromatic.upload-results option that allows any test results to be uploaded to a Phoromatic account, if the account holder has enabled "Allow Phoromatic test systems to upload unscheduled test results."
                        Michael Larabel
                        http://www.michaellarabel.com/



                        • #27
                          For all those screaming "BACKUPS", please remember that backups only make sense if the ratio

                          (Probability of data loss * Cost of data loss) / (Cost of backup)

                          is greater than one. So it really comes down to how likely you think a hardware or software failure is. Reasonable assumptions to me are a 1% failure probability and maybe half an hour to set up the backups, which means that if it takes less than 50 hours of actual work to run the tests, you'd be wasting your time making backups.
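                          The break-even above can be sketched as a tiny calculation (the function name and hour figures are illustrative, just plugging in the assumptions stated here):

```python
def backup_worth_it(p_loss, cost_of_loss_hours, backup_cost_hours):
    # Backups pay off when the expected loss exceeds the cost of backing up:
    # (probability of data loss * cost of data loss) / (cost of backup) > 1
    return (p_loss * cost_of_loss_hours) / backup_cost_hours > 1

# Stated assumptions: 1% failure probability, 0.5 hours to set up backups.
# Break-even is at 0.5 / 0.01 = 50 hours of test work.
print(backup_worth_it(0.01, 60, 0.5))  # 60 hours of work: True
print(backup_worth_it(0.01, 40, 0.5))  # 40 hours of work: False
```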



                          • #28
                            Originally posted by Michael View Post
                            Or will just build the mentioned support into Phoromatic, much cleaner, easier, and more efficient that way.
                            A good idea - or even just writing the data to an external drive.



                            • #29
                              Originally posted by DeepDayze View Post
                              A good idea, or even writing the data to an external drive
                              It just seems to me that blaming the file system is a bit premature. It could just as easily be a controller driver bug or an HD firmware issue.



                              • #30
                                Originally posted by bugmenot View Post
                                For all those that are screaming "BACKUPS", please remember that backups only make sense if the ratio

                                (Probability of data loss * Cost of data loss) / (Cost of backup)
                                The only problem is that it's impossible to know your probability of data loss without a time machine.

