EXT4 Lets Us Down, There Goes Our R600/700 Mesa Tests


  • #31
    Originally posted by DanL View Post
    The only problem is that it's impossible to know your probability of data loss without a time machine.
    Or, probability = 1 if Murphy's Law is correct.



    • #32
      It's becoming more tempting to switch to XFS, fall back to EXT3, or just eagerly await the stabilization of Btrfs.
      So next time we'll see an article on how XFS ate your results? You do know that XFS is even worse in this kind of situation?

      Seriously, if you are doing benchmarks with graphics cards, you know you are bound to get lock-ups once in a while...



      • #33
        It's suspicious that the whole directory was purged. Could it be a bug in your testing software?
        Have you checked the lost+found directory? Have you checked the fsck output in the boot log?
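        For anyone doing the same triage, here is a minimal sketch of how one might list and classify whatever fsck dumped into lost+found (the mount point and the text/binary heuristic are assumptions for illustration, not anything from this thread; recovered inodes lose their names, so sniffing the first bytes is about all you can do):

            import os

            LOST_FOUND = "/lost+found"  # adjust to the affected mount; usually needs root

            def looks_like_text(chunk):
                # Crude heuristic: no NUL bytes and mostly printable ASCII.
                if not chunk:
                    return False
                printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in chunk)
                return b"\x00" not in chunk and printable > 0.9 * len(chunk)

            for name in sorted(os.listdir(LOST_FOUND)):
                path = os.path.join(LOST_FOUND, name)
                if not os.path.isfile(path):
                    continue
                with open(path, "rb") as fh:
                    head = fh.read(512)  # the first bytes are enough to triage
                kind = "text" if looks_like_text(head) else "binary"
                print(name, os.path.getsize(path), "bytes,", kind)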



        • #34
          I've had similar problems. I did a fresh install, upgraded the kernel to 2.6.32, tried some experimental radeon thing that hard-locks, and had many files in many directories get screwed (including entire folders), along with an unbootable system. fsck was going crazy, repeatedly fixing thousands of things (probably breaking things further). I don't understand how something like that passes QA testing. I was using a fakeraid setup; I don't know whether that is related.



          • #35
            Originally posted by figvam View Post
            It's suspicious that the whole directory was purged. Could it be a bug in your testing software?
            Have you checked the lost+found directory? Have you checked the fsck output in the boot log?
            Oh yeah, and tons of leftovers were in lost+found, obviously useless (some binary, some ASCII, no filenames).



            • #36
              Do you have any R300 and R400 cards to test in these Mesa tests, too?



              • #37
                Originally posted by Michael View Post
                Usually it serves its purpose fine and worked well back when the problem with EXT4 was the empty files on crashes.
                That's funny; my home server is on ext4 (and has been for over a year), and last night I had two power cuts. No UPS, so bam, power loss, yet I haven't lost any files.

                Nice, standard mount options for / (root) and /home as well:
                ...


                That said, my "backup" actually revolves around critical data mirrored on my desktop, plus a burn to DVDs every three months.

                But keeping the data in the same folder, on the same partition, on the same drive, in the same machine sounds like a much better backup scheme. I shall be employing such a scheme immediately.



                • #38
                  How about a Phoronix article comparing actively developed backup solutions for GNOME and KDE?

                  Like Back In Time, TimeVault (still active?), sbackup, git :-), etc.?

                  Not really useful for an "I write my own solutions" nerd, but definitely interesting for the average user.
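                  In the meantime, here is a toy sketch of the hardlink-snapshot idea several of those tools (and rsync's --link-dest) build on: unchanged files are hardlinked against the previous snapshot, so each snapshot looks complete but only the delta costs disk space. The paths are invented and the changed-file check is deliberately naive:

                      import os, shutil, time

                      def snapshot(source, backups):
                          snaps = sorted(os.listdir(backups)) if os.path.isdir(backups) else []
                          prev = os.path.join(backups, snaps[-1]) if snaps else None
                          dest = os.path.join(backups, time.strftime("%Y%m%d-%H%M%S"))
                          for root, _dirs, files in os.walk(source):
                              rel = os.path.relpath(root, source)
                              os.makedirs(os.path.join(dest, rel), exist_ok=True)
                              for name in files:
                                  src = os.path.join(root, name)
                                  new = os.path.join(dest, rel, name)
                                  old = os.path.join(prev, rel, name) if prev else None
                                  if old and os.path.exists(old) and \
                                     os.path.getmtime(old) >= os.path.getmtime(src):
                                      os.link(old, new)       # unchanged: hardlink, nearly free
                                  else:
                                      shutil.copy2(src, new)  # changed or new: real copy
                          return dest

                      snapshot("/home/user/docs", "/mnt/backup/snapshots")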



                  • #39
                    Originally posted by DanL View Post
                    The only problem is that it's impossible to know your probability of data loss without a time machine.
                    and with a time machine, the probability is either 1 or 0



                    • #40
                      Originally posted by howlingmadhowie View Post
                      and with a time machine, the probability is either 1 or 0
                      With a time machine, isn't the probability of data loss exactly 0? I.e., you can always go back in time to retrieve the data.

                      Heh, on second thought, nope. The probability of data loss is equal to the probability that the resources required to run the time machine exceed the cost of reproducing the data.



                      • #41
                        If you screw up, blame someone else. It's ok, everyone does it once in a while...



                        • #42
                          I think a lot of people here (including the article) are missing the bigger issue. It's more important to find the reason for the hard lock than to dwell on the data loss; with hard locks happening on a system, no file system is safe. The title could just as easily have read "Radeon driver may cause hard locks, resulting in possible data loss", among many others.

                          Likewise, people saying "xyz filesystem is stable because it works fine here" is really of no use. I notice some were using home servers and power outages as examples. Home servers are probably the least susceptible to data loss even with power outages, as their writes are few and far between compared to their reads and they operate in a relatively static scenario. Also, I have yet to see any file system that guarantees against data loss on a power outage, so testimonials on reliability have to be taken for what they are: personal experiences, with no real hard proof of any scenario.
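                          On that note, the application-side defense against the "empty files after a crash" failure mode mentioned earlier is the classic write-temp/fsync/rename pattern. A minimal sketch (the filename is invented for illustration):

                              import os

                              def atomic_write(path, data):
                                  # A crash leaves either the old file or the
                                  # new one, never a truncated one.
                                  tmp = path + ".tmp"
                                  with open(tmp, "wb") as fh:
                                      fh.write(data)
                                      fh.flush()
                                      os.fsync(fh.fileno())  # push data to disk
                                  os.rename(tmp, path)       # atomic on POSIX
                                  # fsync the directory so the rename itself
                                  # survives a power cut
                                  dfd = os.open(os.path.dirname(path) or ".",
                                                os.O_DIRECTORY)
                                  try:
                                      os.fsync(dfd)
                                  finally:
                                      os.close(dfd)

                              atomic_write("results.txt", b"benchmark output\n")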



                          • #43
                            Originally posted by energyman View Post
                            not only that - all those people who attacked and blocked reiser4 because of 'layer violations' have no problem with btrfs, which does the same but much, much worse.
                            Heh, typical. This sort of crap happens all the time in the open source community, and they get away with it. People have such short attention/memory spans.



                            • #44
                              [quote]
                              Likewise, people saying "xyz filesystem is stable because it works fine here" is really of no use. I notice some were using home servers and power outages as examples. Home servers are probably the least susceptible to data loss even with power outages, as their writes are few and far between compared to their reads and they operate in a relatively static scenario. Also, I have yet to see any file system that guarantees against data loss on a power outage, so testimonials on reliability have to be taken for what they are: personal experiences, with no real hard proof of any scenario.
                              [/quote]


                              What is hilarious is that somebody would choose XFS over ext4 when the problem they are experiencing is data loss on improper shutdown.

                              If you look at the history and current state of XFS development, you'd quickly realize that this is like putting new tires on your car when the problem is that your engine keeps exploding into flaming debris.



                              • #45
                                Originally posted by drag View Post
                                What is hilarious is that somebody would choose XFS over ext4 when the problem they are experiencing is data loss on improper shutdown.

                                If you look at the history and current state of XFS development, you'd quickly realize that this is like putting new tires on your car when the problem is that your engine keeps exploding into flaming debris.
                                Sure, but even using XFS does not guarantee data loss on power failure. With barriers enabled, I had a server with a weak power supply spontaneously reboot over 50 times within a 24-hour period, and it was a high-usage server with plenty of read/write operations every minute. Just a personal experience.
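                                For what it's worth, here is a quick sketch of how one might spot filesystems mounted with barriers explicitly disabled (option names vary by kernel and filesystem, and mounts using the defaults simply won't list the option, so treat this as a rough check):

                                    # Flag ext3/ext4/xfs mounts that explicitly
                                    # disable write barriers, the mechanism that
                                    # keeps the journal trustworthy across a
                                    # power cut.
                                    with open("/proc/mounts") as fh:
                                        for line in fh:
                                            dev, mnt, fstype, opts = line.split()[:4]
                                            if fstype in ("ext3", "ext4", "xfs"):
                                                flags = opts.split(",")
                                                off = ("nobarrier" in flags
                                                       or "barrier=0" in flags)
                                                print(mnt, fstype,
                                                      "BARRIERS OFF" if off else "ok")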

