Btrfs File-System For Old Computers?

  • S.Pam
    replied
    Michael, would you consider updating this test with 2019 HDDs and SSDs?

  • liam
    replied
    Originally posted by drag View Post
    It depends on what you want. For some people, running 'noatime' is unacceptable because the atime is data that is valuable for their specific purpose.

    Btrfs is going to be a bit slower than Ext4 generally. Ext4 is a very fast FS, despite what the naysayers think, especially when it comes to pure database workloads... the database uses direct I/O, which is something ext4 can be fantastic at...

    That being said, given the features and extra levels of protection btrfs can offer, it's probably worth using in the future. When btrfsck comes out, I will start taking btrfs more seriously.
    You're spot on about ext4 being a very performant FS (at least, potentially). That is the reason, I would hazard, that Google hired Ts'o. A lot of his work has been targeted at making ext4 scale to extremely large file systems, and it's paying off for everyone.
    As for data protection, although ext4 doesn't protect the data as strongly as zfs/btrfs (again, potentially), it does provide fairly strong protection for the journal, which should make it harder for the system to become corrupted (if performance isn't important, I guess you could run the journal in write-through mode, and that should provide additional data guarantees). Additionally, it has an online defragger; as it is experimental I've been hesitant to try it, but it should help maintain performance on machines with very long uptimes.
    If you want data guarantees, this paper (http://pages.cs.wisc.edu/~bpkroth/papers/ext4parity.pdf) indicates that changes to the MD layer should provide the type of data guarantees zfs has while maintaining the separation of duties between the filesystem driver and the I/O layer.
    Regarding btrfs being slow, it SEEMS as if it would only be slow when writing; due to the extra work involved in finding good layout schemes at write time, reading should be quite fast. I'd be interested in seeing some Phoronix benchmarks of a mostly full FS using both btrfs and ext4.
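
    On the direct I/O point, this is roughly the access pattern a database uses (just a quick Python sketch of my own, untested; the path is made up, and O_DIRECT needs block-aligned buffers, which is what the mmap is for):

    Code:
import mmap
import os

BLOCK = 4096                             # assume a 4K logical block size
path = "/tmp/directio-demo.bin"          # made-up test file

# Write one aligned block, bypassing the page cache.
buf = mmap.mmap(-1, BLOCK)               # anonymous mmap = page-aligned buffer
buf.write(b"x" * BLOCK)
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
os.write(fd, buf)                        # offset, length and buffer all aligned
os.close(fd)

# Read it back the same way.
out = mmap.mmap(-1, BLOCK)
fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
os.readv(fd, [out])                      # readv fills the aligned buffer in place
os.close(fd)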

  • sbergman27
    replied
    Originally posted by deanjo View Post
    Data loss.
    Barriers are being phased out. They turned out not to be worth the cost. Individual filesystems can accomplish the same thing with less cost simply by making the right calls at the right times.
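
    From an application's point of view the same principle applies (a rough, untested Python sketch of my own, not the kernel change itself): flush exactly what needs to be durable, when it needs to be durable, instead of paying for ordering everywhere:

    Code:
import os

def durable_write(path, data):
    """Write data and force it to stable storage before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fdatasync(fd)      # flush the file's data (and required metadata)
    finally:
        os.close(fd)
    # Flush the containing directory too, so the entry itself is durable.
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)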

  • crazycheese
    replied
    Originally posted by deanjo View Post
    It would run on 8 megs if you used the /nm install option.
    If 30 real-time minutes are acceptable. I remember quite well how a 486SX with 4 MB of RAM started Warcraft 2, whose minimum requirement of 8 MB of RAM was overcome by running it under Windows 3.1 with swapping. DOS4GW swapping didn't work at all.

  • DeepDayze
    replied
    Originally posted by oliver View Post
    A Core 2 Duo isn't old! Now a 4500 (or was it 4200?) RPM IDE laptop drive, now that is old!

    (Typing this from a T42 with an IDE disk.)

    I'm a little bummed that the most important 'safety' option has such a huge impact on performance. Hopefully this will be fixed soon, as I'm hoping to use this laptop for another 2-3 years.
    I have the exact same ThinkPad, with a 60GB disk and 2GB of RAM, and it's a very dependable machine I'd also like to keep for a few more years. So it's good to see that btrfs would be at least fairly usable on a machine like this one, and hopefully the right optimizations can be found for hardware of this caliber so that btrfs can become a better FS.

  • Kivada
    replied
    Originally posted by TeoLinuX View Post
    Does the PowerMac G4 push OS X?
    I learned something new!
    I believed it worked only on x86 machines. I've never been a Mac user, and Wikipedia busted my false belief
    --> it worked on them until Leopard 10.5. Wow.

    Can any Mac user tell whether it chokes the hardware, or is it responsive? Leopard on a G4, I mean.
    Depends on the version; 10.4 runs a lot better than 10.5 does. I can't justify upgrading the CPU from a single "Apollo 6" 7455/G4e 800MHz to a dual-CPU "Apollo 8" 7448 1.8GHz (not dual core, 2x CPUs on a daughter card) and flashing a PC ATI FireGL X3 AGP with the ROM off a Mac X800 XT. I could go with a 7800GS AGP with the ROM off a Mac Nvidia 7800GT, but since the only GPU drivers for PPC Macs are the OSS ones, the X800 will run much better under Linux than the 7800.

    I have more issues with more modern software that is poorly ported, though you get a few gems like the unofficial port of Firefox, TenFourFox, which has builds for several PowerPC CPUs.

  • deanjo
    replied
    Originally posted by ciplogic View Post
    Windows 98 will not run if you don't have at least 16 MB of RAM,

    It would run on 8 megs if you used the /nm install option.

  • ciplogic
    replied
    Originally posted by devius View Post
    Got to dust off my trustworthy 486DX2-66 to see what a really old computer can do.
    It would be nice to see some real-world tests done by a human, like the time to compress a file (or files), boot time, application start-up time... you know, stuff that actually matters.
    With a DX2 you could run just DOS and Windows 95; Windows 98 will not run if you don't have at least 16 MB of RAM, and NT 4 will not run without 24 MB of RAM.
    You're right about what should count as real-world stuff. Did you notice that "autodefrag" behaved badly, even though autodefrag is the way it would be used today on a desktop system? Why wasn't a fragmenter run beforehand to simulate real disk usage?
    But lately, have you noticed a relevant benchmark run here, made with a good methodology? One that is neither spectacular journalism, like showing off a new hardware feature (OpenGL 3.0 in a benchmark that depends on driver support) or a software feature (adding LLVM to Mono can speed up some benchmarks), nor too deep into what the results mean.
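
    Something like this would do as a crude fragmenter pass before the real benchmark (a quick, untested Python sketch of my own; the mount point is made up): create, grow and randomly delete lots of small files so the free space gets chewed up before measuring anything:

    Code:
import os
import random

TARGET = "/mnt/test/aging"    # made-up mount point of the fs under test
os.makedirs(TARGET, exist_ok=True)
random.seed(42)               # repeatable "aging" between runs

files = []
for i in range(5000):
    name = os.path.join(TARGET, "f%d" % i)
    with open(name, "wb") as f:
        f.write(os.urandom(random.randint(4096, 256 * 1024)))
    files.append(name)
    # Randomly delete earlier files to punch holes in the free space.
    if random.random() < 0.3:
        os.remove(files.pop(random.randrange(len(files))))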

  • Kano
    replied
    Btrfs with default settings and a 3.0.x kernel is a joke even with an Intel i7. All I wanted to do was create a new image using Debian Live. That was so extremely slow that I wiped out the partition after the amount of time that would usually be enough to create an image, while it was still building the chroot on btrfs. Ext4 is the way to go as long as the defaults are this stupid. It's funny that the only option that gives reasonable speed in that benchmark is the least recommended one; I haven't tried it yet, however.

  • devius
    replied
    Got to dust off my trustworthy 486DX2-66 to see what a really old computer can do.
    It would be nice to see some real-world tests done by a human, like the time to compress a file (or files), boot time, application start-up time... you know, stuff that actually matters.
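
    The compression test, at least, is trivial to do by hand; for example (a minimal Python sketch, untested; the path is made up, point it at something big on the partition under test):

    Code:
import gzip
import shutil
import time

SOURCE = "/mnt/test/big-file.bin"   # made-up path on the fs being tested

start = time.time()
with open(SOURCE, "rb") as src, gzip.open(SOURCE + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)    # stream-compress the file
print("compressed in %.1f seconds" % (time.time() - start))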
