Any way you could include nilfs2 in these?
Ubuntu 12.04 LTS - Benchmarking All The Linux File-Systems
-
Stay tuned for more benchmark results, including when testing each of the Btrfs mount options on Ubuntu 12.04.
Please test the same on a traditional HDD.
The best FS for an SSD might not also be the best for an HDD.
-
Yes, I had similar experiences with btrfs: it kept getting slower and slower, and after I tried to update to the Ubuntu 12.04 beta I could not get it to boot any more. Not sure, maybe it had something to do with EFI. I think it was not btrfs-related, because it should have shown GRUB even if it could not access the btrfs partition (/boot was on ext2), but it just said "no device" or something at boot. Trying to fix GRUB did not help at all, so I just reinstalled, and before that I backed up my home. But OK, that was not btrfs' fault I guess ^^.
EFI shit sucks ^^.
I don't mean EFI ^^ I mean the replacement for MBR, I don't know the name right now ^^. I think that was somehow the problem. I will only try that again if my system is unable to boot from good old solid 1000-year-old MBR.
But back to btrfs: it also threw out some errors when I tried to delete the old apt-btrfs snapshots; it sometimes just refused to do that and said something about errors. Because there is no fsck that can repair it, I will wait a long time now before going with it, I think. BTW, I even made some benchmarks extracting a Linux kernel on the old and the new partition: on the old btrfs I had, at the end, 15 MB/s write performance (kernel extract); with ext4 (both with LVM) I get 80 MB/s. So btrfs should mature some more before it's usable long-term. But again, most importantly, it did not lose the data in the volume.
Sorry for my English, I haven't been awake long today.
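The "kernel extract" benchmark described above can be sketched roughly as below. This is a hedged, self-contained stand-in: a real run would extract an actual kernel tarball onto each filesystem under test, and the paths, payload size, and whole-second timing here are my assumptions, not the poster's exact method.

```shell
# Rough sketch of a tar-extract write benchmark (hypothetical setup).
WORK=$(mktemp -d)
mkdir -p "$WORK/src" "$WORK/out"
# 64 MiB stand-in payload instead of a real kernel source tree:
dd if=/dev/zero of="$WORK/src/payload" bs=1M count=64 2>/dev/null
tar -cf "$WORK/tree.tar" -C "$WORK" src
sync
start=$(date +%s)
tar -xf "$WORK/tree.tar" -C "$WORK/out"
sync
end=$(date +%s)
elapsed=$(( end - start ))
[ "$elapsed" -lt 1 ] && elapsed=1   # avoid division by zero on fast disks
rate=$(( 64 / elapsed ))            # MB/s, coarse (whole seconds only)
echo "extracted 64 MiB in ${elapsed}s (~${rate} MB/s)"
rm -rf "$WORK"
```

For comparable numbers you would run this on each filesystem (btrfs, ext4 on LVM, etc.) with caches dropped between runs.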
-
Originally posted by FourDMusic: However, it should be kept in mind that there is no single global answer to this question. It will depend on the functions primarily performed on the storage device.
In the end, that's all this I/O and transfer-rate testing is about: letting anyone infer the results for their own use cases.
-
Raid Request Clarification
I guess my above post did not specify my interest is in relative raid performance using traditional hard disks. I suspect that software raid 6 with a hot spare is a reasonable choice for a home archival data server where your hot button issue is no data loss caused by disk failure. We are getting closer to Xeon Atom boards with 8 PCIe lanes that can be used for many sata ports. It would be nice to know that Btrfs raid rebuild performance is comparable to mdamd if you are using a low powered CPU from whoever.
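The mdadm setup being discussed (software RAID 6 plus a hot spare) can be sketched as below. This is an illustrative fragment, not a tested recipe: the device names are assumptions, the commands require root, and you should adapt the disk list to your own system before running anything.

```shell
# Hypothetical sketch: software RAID 6 with one hot spare via mdadm.
# Device names /dev/sdb..sdf are placeholders; adjust for your system.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Watch rebuild/resync progress (relevant to the low-CPU rebuild question):
cat /proc/mdstat

# Mark a member failed to observe a rebuild onto the hot spare:
mdadm /dev/md0 --fail /dev/sdb
```

Timing `/proc/mdstat` progress during that rebuild on a low-powered CPU is one way to get the mdadm-vs-btrfs comparison the post asks about.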
-
Originally posted by malkavian: Yes, there are "cheap" SSDs nowadays, but they use MLC chips, which are slower and have a short life (a small number of writes before failing). In 2 or 3 years you could have problems with an MLC SSD. SLC has a much longer life (a large number of writes before failing) and is a lot quicker, but it is much more costly per GB. http://en.wikipedia.org/wiki/Multi-level_cell
A standard number I've seen for older MLC technologies is 10,000 write cycles, and if anything the current number is higher. That means any single cell can be rewritten 10,000 times, and there's (say) 128 GB of cells available. At 10,000 cycles, that's about 1.28 PB of total writes, or roughly 700 GB of writes every day for five years, if the wear levelling is perfect. If you use 90% of the drive for constant content, so the rewrites only touch the remainder, it should wear out in five years at about 70 GB/day, and even if you then apply a very pessimistic 10x safety margin you're at 7 GB of writes every day for five years. Realistically, it'll be fine.
Alternatively, Intel claims a 1.2-million-hour MTBF on their 120 GB SSD, or over 130 years. I have no idea how they came up with that number.
Last edited by dnebdal; 17 March 2012, 12:36 PM.
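The back-of-the-envelope endurance arithmetic above can be written out explicitly. The inputs (10,000 P/E cycles, 128 GB) come from the post; the exact day count and rounding choices are my assumptions, so the results land near, not exactly on, the post's rounded figures.

```python
# SSD write-endurance arithmetic from the post above (decimal GB/PB).
CYCLES = 10_000          # P/E cycles per cell (older MLC figure)
CAPACITY_GB = 128        # drive capacity
DAYS = 5 * 365           # five years, ignoring leap days

total_writes_gb = CYCLES * CAPACITY_GB      # 1,280,000 GB ~= 1.28 PB
per_day_gb = total_writes_gb / DAYS         # assumes perfect wear levelling

# If 90% of the drive holds static data, only 10% of cells absorb writes:
per_day_static_gb = per_day_gb * 0.10
# The post then applies a further pessimistic 10x safety margin:
per_day_safe_gb = per_day_static_gb / 10

print(f"total endurance: {total_writes_gb / 1e6:.2f} PB")
print(f"per day over 5 years: {per_day_gb:.0f} GB")
print(f"90% static + 10x margin: {per_day_safe_gb:.1f} GB/day")
```

This reproduces the post's chain: ~1.28 PB total, ~700 GB/day over five years, ~70 GB/day with 90% static content, ~7 GB/day with the extra safety margin.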
-
Originally posted by malkavian: If nowadays wear levelling is well implemented, I suppose you are right.
-
Originally posted by blackiwid: I don't mean EFI ^^ I mean the replacement for MBR, I don't know the name right now ^^. I think that was somehow the problem. I will only try that again if my system is unable to boot from good old solid 1000-year-old MBR.
Not that it really matters, as long as you can get your file system onto the disk at the desired position and length, and boot from it.
Last edited by dnebdal; 17 March 2012, 01:06 PM.