EXT4 Lets Us Down, There Goes Our R600/700 Mesa Tests

  • l8gravely (Junior Member) replied:
    Originally posted by chithanh View Post
    Next time, store test results remotely, e.g. on NFS. That way a software or hardware failure on the test box will not cause loss of the test data.
    Absolutely seconded! Why you don't have a NAS box to store all your data on is beyond me. Sure, for individual test runs you want them on a local disk... but core data should be on mirrored and backed up disks.

    Belt and suspenders! But I suspect you know all this, and I'm sure you've already run into some of the problems with Ubuntu Lucid and an NFS home directory. It just sucks...
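    A mirror like that fits in a few lines. This is only a sketch; the paths are hypothetical stand-ins (on a real test box the source would be the local results directory and the destination an NFS/NAS mount), and the demo below uses throwaway directories so it runs anywhere:

    ```python
    import os
    import shutil
    import tempfile
    from pathlib import Path

    def mirror_results(src: str, dest: str) -> None:
        """Copy the local test-results tree onto a backup path (e.g. an NFS mount)."""
        Path(dest).mkdir(parents=True, exist_ok=True)
        shutil.copytree(src, dest, dirs_exist_ok=True)  # dirs_exist_ok needs Python 3.8+

    # Demo with temp directories; in real use src might be
    # ~/.phoronix-test-suite/test-results and dest something like /mnt/nas/results.
    src = tempfile.mkdtemp()
    Path(src, "run-1.xml").write_text("<result/>")
    dest = os.path.join(tempfile.mkdtemp(), "pts-results")
    mirror_results(src, dest)
    ```

    Run it from cron after each benchmark session and a dead local disk only costs you the runs since the last sync.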

    John


  • starchild (Junior Member) replied:
    Hi Michael,

    Long time reader, but this I have to comment on.

    Now, this brings up something I've really missed from all the filesystem tests on Phoronix: reliability. The single most important thing for any disk filesystem is to keep your data safe. I took this for granted more than 5 years ago when I decided to use XFS on my desktop machine, and lost my entire /home after a crash plus hard reboot. Some digging revealed that this was quite normal, as XFS was designed for UPS-backed computers. After that episode I switched to ext3, which has survived everything ever since.

    Really, who cares if a filesystem can create 150 or 200 files per second if one of them is likely to kill your data in case of a power drop or hard reboot?

    So a filesystem robustness test would be totally sweet. Even nicer would be if some abuse were part of every filesystem test round, since benchmarks such as yours are one of the reasons some filesystem developers have started to compromise data integrity for performance.
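    For what it's worth, the invariant such an abuse test would check is the classic write-fsync-rename recipe: after the call returns, a power cut leaves either the old file or the new one, never a truncated mix. A minimal sketch (this is the standard POSIX pattern, not anything from the article):

    ```python
    import os
    import tempfile

    def durable_write(path: str, data: bytes) -> None:
        """Atomically replace `path` with `data`, surviving a crash mid-write."""
        d = os.path.dirname(os.path.abspath(path))
        fd, tmp = tempfile.mkstemp(dir=d)   # temp file on the same filesystem
        try:
            os.write(fd, data)
            os.fsync(fd)                    # force file data to stable storage
        finally:
            os.close(fd)
        os.replace(tmp, path)               # atomic rename over the target
        dirfd = os.open(d, os.O_RDONLY)
        try:
            os.fsync(dirfd)                 # persist the rename itself
        finally:
            os.close(dirfd)

    target = os.path.join(tempfile.mkdtemp(), "settings.conf")
    durable_write(target, b"old")
    durable_write(target, b"new")
    ```

    A robustness harness would hammer a filesystem with writes like this, cut power (or virtually yank the block device), remount, and verify that every file is either whole-old or whole-new.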

    BTW: Love your X.Org coverage :-)


  • deanjo (Moderator) replied:
    Originally posted by drag View Post
    What is hilarious is that somebody would want to choose XFS over Ext4 when the problem they are experiencing is data loss on improper shutdown.

    If you look at the history and current state of XFS development you'd quickly realize that is like swapping out new tires in your car when the problem is that your engine is constantly exploding into flaming debris.
    Sure, but even using XFS does not guarantee data loss on power failure. With barriers enabled, I had a server with a weak power supply reboot spontaneously over 50 times within a 24-hour period, and it was a high-usage server with plenty of read/write operations every minute. Just a personal experience.


  • drag (Senior Member) replied:
    Originally posted by deanjo View Post
    As well, people saying "xyz filesystem is stable because it works fine here" is really of no use. I notice some were using home servers and power outages as examples. Home servers are probably the least susceptible to data loss even with power outages, as their writes are few and far between compared to their reads, and they operate in a relatively static scenario. Also, I have yet to see any file system that guarantees against data loss on a power outage, so testimonials on their reliability have to be taken for what they are: personal experiences with no real hard proof of any scenario.


    What is hilarious is that somebody would want to choose XFS over Ext4 when the problem they are experiencing is data loss on improper shutdown.

    If you look at the history and current state of XFS development you'd quickly realize that is like swapping out new tires in your car when the problem is that your engine is constantly exploding into flaming debris.


  • libv (X.Org Developer) replied:
    Originally posted by energyman View Post
    not only that - all those people who attacked and blocked reiser4 because of 'layer violations' have no problem with btrfs, which does the same but much, much worse.
    Heh, typical. This sort of crap happens all the time in the open source community, and they get away with it. People have such short attention/memory spans.


  • deanjo (Moderator) replied:
    I think a lot of people here (including the article) are not focusing on the bigger issue. It's more important to find the reason for the 'hard lock' than it is to dwell on the data loss. With 'hard locks' happening on a system, no file system is safe. The title could just as easily read "Radeon driver may cause hard locks resulting in possible data loss", among many others.

    As well, people saying "xyz filesystem is stable because it works fine here" is really of no use. I notice some were using home servers and power outages as examples. Home servers are probably the least susceptible to data loss even with power outages, as their writes are few and far between compared to their reads, and they operate in a relatively static scenario. Also, I have yet to see any file system that guarantees against data loss on a power outage, so testimonials on their reliability have to be taken for what they are: personal experiences with no real hard proof of any scenario.


  • LinuxAffenMann (Junior Member) replied:
    If you screw up, blame someone else. It's ok, everyone does it once in a while...


  • droidhacker (Senior Member) replied:
    Originally posted by howlingmadhowie View Post
    and with a time machine, the probability is either 1 or 0
    With a time machine, isn't the probability of data loss exactly 0? I.e., you can always go back in time to retrieve the data.

    Heh, on second thought, nope. The probability of data loss is equal to the probability that the resources required to run the time machine exceed the cost of reproducing the data.


  • howlingmadhowie (Junior Member) replied:
    Originally posted by DanL View Post
    The only problem is that it's impossible to know your probability of data loss without a time machine.
    and with a time machine, the probability is either 1 or 0


  • AliBaba (Junior Member) replied:
    How about a Phoronix article comparing actively developed backup solutions for Gnome and KDE?

    Like Back In Time, TimeVault (still active?), sbackup, git :-), etc.?


    Not really useful for an "I write my own solutions" nerd, but definitely interesting for the average user.
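    The git option is less far-fetched than the smiley suggests. A snapshot-everything wrapper fits in a dozen lines; the directory here is a throwaway stand-in for something like ~/Documents, the identity settings are dummies, and it assumes the `git` binary is on PATH:

    ```python
    import subprocess
    import tempfile
    from pathlib import Path

    def git_snapshot(workdir: str, message: str) -> None:
        """Commit everything under workdir, initializing the repo on first use."""
        def git(*args: str) -> None:
            subprocess.run(
                ["git", "-C", workdir,
                 "-c", "user.name=backup", "-c", "user.email=backup@localhost",
                 *args],
                check=True, capture_output=True)
        if not Path(workdir, ".git").exists():
            git("init")
        git("add", "-A")                          # stage new, changed, deleted files
        git("commit", "--allow-empty", "-m", message)

    # Demo on a temp directory standing in for a real home folder.
    home = tempfile.mkdtemp()
    Path(home, "notes.txt").write_text("first draft")
    git_snapshot(home, "nightly snapshot")
    ```

    Run nightly from cron, every snapshot is browsable with plain `git log` and `git checkout` — though unlike the GUI tools it won't dedup large binaries well.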

