Btrfs File-System For Old Computers?
-
Originally posted by drag: It depends on what you want. For some people running 'noatime' is unacceptable, because atime is data that is valuable for their specific purpose.
Btrfs is going to be a bit slower than Ext4 generally. Ext4 is a very fast FS, despite what the naysayers think, especially when it comes to pure database workloads: databases use direct I/O, which is something ext4 can be fantastic at...
That being said, due to the features and extra levels of protection btrfs can offer, it's probably worth using in the future. When btrfsck comes out, I will start taking btrfs more seriously.
As for data protection: although ext4 doesn't protect the data as strongly as zfs/btrfs (again, potentially), it does provide fairly strong protection of the journal, which should make it harder for the system to become corrupted (if performance isn't important, I guess you could make the journal write-through, which should provide additional data guarantees). Additionally, it has an online defragmenter; as it's experimental I've been hesitant to try it, but it should help maintain performance on machines with very long uptimes.
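To make the trade-off concrete, here is a hypothetical /etc/fstab sketch. The device names and mount points are placeholders, and reading "make the journal write-through" as ext4's data=journal mode (which journals file data as well as metadata) is my interpretation, not something stated in the thread:

```
# Placeholder fstab entries, not recommendations:
/dev/sda2  /fast  ext4  defaults,noatime  0  2   # skip access-time updates for speed
/dev/sda3  /safe  ext4  data=journal      0  2   # journal file data too: slower,
                                                 # but data gets journal protection
```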
If you want data guarantees, this paper (http://pages.cs.wisc.edu/~bpkroth/papers/ext4parity.pdf) indicates that changes to the MD layer should provide the type of data guarantees zfs has while maintaining the separation of duties between the file-system driver and the I/O layer.
Regarding btrfs being slow: it SEEMS as if it would only be slow when writing, due to the extra work involved in finding good layout schemes; reading should be quite fast. I'd be interested in seeing some Phoronix benchmarks of a mostly full filesystem using both btrfs and ext4.
-
Originally posted by deanjo: Data loss.
-
Originally posted by deanjo: It would run on 8 megs if you used the /nm install option.
-
Originally posted by oliver: A Core 2 Duo isn't old! Now a 4500 (or was it 4200?) RPM IDE laptop drive, now that is old!
(Typing this from a T42 with an IDE disk.)
I'm a little bummed that the most important 'safety' option has such a huge impact on performance. Hopefully this will be fixed soon, as I plan to use this laptop for another 2-3 years.
-
Originally posted by TeoLinuX: Does the PowerMac G4 push OS X?
I learned something new!
I believed it worked only on x86 machines. I've never been a Mac user, and Wikipedia busted my false belief
--> it worked on PowerPC until Leopard 10.5, wow.
Can any Mac user tell me whether it chokes the hardware, or is it responsive? Leopard on a G4, I mean.
I have more issues with more modern software that is poorly ported, though you get a few gems like the unofficial port of Firefox, TenFourFox, which has builds for several PowerPC CPUs.
-
Originally posted by devius: Got to dust off my trustworthy 486DX2-66 to see what a really old computer can do
It would be nice to see some real world tests done by a human, like time to compress a file(s), boot time, application start-up time... you know, stuff that actually matters.
You're right about what should count as real-world testing. Did you notice that "auto-defrag" behaved badly? Auto-defrag matters because it reflects how a desktop system is actually used today. Why wasn't a fragmentation test run beforehand to simulate real disk usage?
And have you noticed a genuinely relevant benchmark run here lately, one made with a good methodology? One that is neither spectacular journalism — like showing off a new hardware feature (OpenGL 3.0 in a benchmark that depends on driver support) or a software feature (adding LLVM to Mono could speed up some benchmarks) — nor going too deep into what the results mean.
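The fragmentation test suggested above could be sketched roughly like this. The directory path and file counts are arbitrary placeholders; the idea is just to interleave creation and deletion so free space on the filesystem under test ends up fragmented before the benchmark runs:

```shell
# Sketch: pre-age a filesystem before benchmarking.
# TARGET is a placeholder; point it at the filesystem under test.
TARGET=/tmp/fs-aging
mkdir -p "$TARGET"
for i in $(seq 1 200); do
    head -c 65536 /dev/urandom > "$TARGET/a$i"   # file that will be deleted
    head -c 65536 /dev/urandom > "$TARGET/b$i"   # file that will remain
done
rm -f "$TARGET"/a*        # delete every other file, punching holes in free space
ls "$TARGET" | wc -l      # 200 surviving files
```

A real aging workload would use varied file sizes and run until the disk is mostly full, but even this simple interleaving changes the allocator's starting conditions.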
-
Btrfs with default settings and a 3.0.x kernel is a joke even on an Intel i7. All I wanted to do was create a new image using Debian Live. That was so extremely slow that I wiped the partition after waiting the amount of time that would usually be enough to create an image, while btrfs was still building the chroot. Ext4 is the way to go as long as the defaults are this stupid. It's funny that the only option that gives reasonable speed in that benchmark is the least recommended one; I haven't tried it yet, though.
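For context, a hypothetical fstab line showing the sort of non-default btrfs options people experimented with in the 3.0-kernel era. The device name and the particular option choices are my assumptions, not something recommended in the thread:

```
/dev/sda1  /  btrfs  defaults,noatime,compress=lzo  0  1
```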
-
Got to dust off my trustworthy 486DX2-66 to see what a really old computer can do
It would be nice to see some real world tests done by a human, like time to compress a file(s), boot time, application start-up time... you know, stuff that actually matters.
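The kind of hands-on test suggested here could be sketched as follows. The paths and the 10 MiB size are placeholders; the point is to time an everyday operation and repeat the run on each filesystem being compared:

```shell
# Sketch: time a simple real-world operation (compressing a file)
# instead of a synthetic benchmark.
mkdir -p /tmp/fstest
head -c 10485760 /dev/urandom > /tmp/fstest/sample.bin   # 10 MiB of test data
time tar -czf /tmp/fstest/sample.tar.gz -C /tmp/fstest sample.bin
ls -l /tmp/fstest/sample.tar.gz
```

For comparable numbers you would also want to drop caches between runs and average several repetitions, but even this crude timing says more about desktop feel than raw throughput figures do.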