Definition of small files?
Could you add benchmarks that resemble the real world a bit more? 1MiB files are not small files; 1KiB files are. So extract the Firefox source or Linux kernel source to the drive and copy it out again, and benchmark those. That's a real scenario, and it would probably uncover weaknesses in Ext4!
I'm moving my installation to a new Vertex 4 SSD right now from an old 250GiB WD. Both have Ext4 filesystems. hdparm claims ~65MiB/second on the WD SATA drive, but copying the Mozilla source gives me 500KiB/second. Far from your fantasy theoretical numbers!
Hoping for more realistic benchmarks in the future!
regards,
Brunis
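A minimal Python sketch of the kind of small-file test suggested above. The tree shape, file count, and 1KiB file size are illustrative stand-ins for a real source tree like the kernel's, and the helper names are made up for the demo:

```python
import os
import shutil
import tempfile
import time

def make_tree(root, dirs=50, files_per_dir=40, size=1024):
    """Generate a tree of many small files (1 KiB each by default)."""
    payload = b"x" * size
    for d in range(dirs):
        path = os.path.join(root, f"dir{d:03d}")
        os.makedirs(path)
        for f in range(files_per_dir):
            with open(os.path.join(path, f"file{f:03d}"), "wb") as fh:
                fh.write(payload)

def timed_copy(src, dst):
    """Copy the whole tree and return elapsed wall-clock seconds."""
    start = time.monotonic()
    shutil.copytree(src, dst)
    return time.monotonic() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "src")
        dst = os.path.join(tmp, "dst")
        os.makedirs(src)
        make_tree(src)
        elapsed = timed_copy(src, dst)
        nfiles = sum(len(fs) for _, _, fs in os.walk(dst))
        print(f"copied {nfiles} small files in {elapsed:.2f}s")
```

Note that a copy like this mostly measures metadata and per-file overhead rather than streaming throughput, which is exactly why it tends to diverge from hdparm-style sequential numbers; for a cold-cache run you would also want to drop the page cache between the extract and the copy.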
Ubuntu 12.10 File-Systems: Btrfs, EXT4, XFS
-
An IOzone test result picture was used twice
At the bottom of page 2, the result picture for the IOzone v3.405 test
(Record Size: 1MB - File Size: 8GB - Disk Test: Read Performance) appears twice.
A simple copy & paste failure?
Can only be a hardware failure of the keyboard...
-
Ubuntu <> Linux
Originally posted by TobiSGD:
Not an Ubuntu user, so a small question: do I have to expect that the Ubuntu kernel will be patched in a way that I can see differences in filesystem performance compared to a stock kernel from kernel.org? If so, wouldn't it make sense to also put the stock kernel into those benchmarks? Or is there any other reason to test specifically the Ubuntu kernel and not a stock one, besides having Ubuntu in the title of the article for getting more page-hits from Ubuntu users?

But on your specific point, I agree: I want to know how the file systems are currently running on the latest Linux kernel, not how they run on the Ubuntu 12.10 beta and whatever kernel it's using.
-
Originally posted by highlandsun:
Of course, in a dedicated database deployment, I would just have a single DB residing in a dedicated filesystem, and preallocate all of the space for the DB file(s). At that point, metadata updates are irrelevant; they would only be occurring for the mtime stamps and not for any structural changes, so FS structural corruption would be impossible.
-
I believe that fsck is as essential as it was with the previous generation of filesystems.
-
Originally posted by jabl:
Sure, a filesystem which doesn't support barriers (such as JFS or ext2) will obviously outperform one which does (e.g. ext4, btrfs, xfs) on a test which tests synchronous writes (fsync()). In order to avoid an apples to oranges comparison, you need to either
- Disable the disk write cache when using a filesystem without barrier support (slower but safer).
- Disable barriers on the filesystems with barrier support (mount with barrier=0) (fast but unsafe).
- Or even better, use a device with a non-volatile write cache (e.g. a RAID card with battery backed cache) (fast AND safe).

Of course, in a dedicated database deployment, I would just have a single DB residing in a dedicated filesystem, and preallocate all of the space for the DB file(s). At that point, metadata updates are irrelevant; they would only be occurring for the mtime stamps and not for any structural changes, so FS structural corruption would be impossible.
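The preallocation idea above can be sketched in a few lines of Python, assuming a Linux system where os.posix_fallocate is available; the helper name and the example path are my own invention:

```python
import os

def create_preallocated(path, size_bytes):
    """Create a file and reserve its full extent up front, so later
    writes into it never trigger new block allocation; the only
    metadata updates left are timestamp changes."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    try:
        # Ask the filesystem to allocate blocks for [0, size_bytes) now.
        os.posix_fallocate(fd, 0, size_bytes)
    finally:
        os.close(fd)

# Example: reserve 1 GiB for a database file before first use.
# create_preallocated("/var/lib/mydb/data.db", 1 << 30)
```

On extent-based filesystems such as ext4 and XFS this reserves the space without writing zeroes, so it is fast even for large files.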
-
Originally posted by highlandsun:
You should have also tested JFS. In my database tests, JFS outperforms all the others.

Sure, a filesystem which doesn't support barriers (such as JFS or ext2) will obviously outperform one which does (e.g. ext4, btrfs, xfs) on a test which tests synchronous writes (fsync()). In order to avoid an apples to oranges comparison, you need to either
- Disable the disk write cache when using a filesystem without barrier support (slower but safer).
- Disable barriers on the filesystems with barrier support (mount with barrier=0) (fast but unsafe).
- Or even better, use a device with a non-volatile write cache (e.g. a RAID card with battery backed cache) (fast AND safe).
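To make the fsync() point concrete, here is a minimal Python sketch of a durable write: the fsync() calls below are the synchronous writes that barriers exist to make safe, and the step a barrier-less setup effectively skips, which is why it looks faster on such benchmarks. The helper name is invented for the example:

```python
import os

def durable_write(path, data):
    """Write data and push it through the OS page cache to the device.
    With barriers enabled, fsync() also flushes the drive's volatile
    write cache; with barrier=0 the data may still sit in that cache
    when power is lost."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # flush file data and metadata
    # Also persist the directory entry that names the new file.
    dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```

A benchmark dominated by calls like this is really measuring how honestly the storage stack implements cache flushes, not raw disk speed.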
-
You should have also tested JFS. In my database tests, JFS outperforms all the others. Also, you should do a test with two HDDs (or a combination of HDD and SSD) where the journal for the main filesystem is stored on the 2nd device. Again, in my tests this can make a huge difference in overall throughput.
My results are posted here:
-
Originally posted by devius:
I'm 99.9% sure that's the case, simply because physically the HDD in this test would never be able to achieve such high random write values. Even the fastest consumer HDD (the 1TB VelociRaptor) wouldn't be able to achieve even 1/10 of those 35MB/s.
-
Dpkg is incredibly, horribly, painfully slow with btrfs due to fsync calls. Would have been nice to see that added to the benchmarks.
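To illustrate why per-file fsync() hurts so much, here is a rough Python sketch that writes a batch of small files with and without an fsync() after each one, loosely mimicking what a package manager does when unpacking; the file counts and sizes are invented for the demo:

```python
import os
import tempfile
import time

def unpack(root, n, payload, sync_each=False):
    """Write n small files, optionally fsync()ing each one,
    and return elapsed wall-clock seconds."""
    start = time.monotonic()
    for i in range(n):
        path = os.path.join(root, f"f{i:04d}")
        with open(path, "wb") as fh:
            fh.write(payload)
            if sync_each:
                fh.flush()
                os.fsync(fh.fileno())   # one cache flush per file
    return time.monotonic() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        a = os.path.join(tmp, "a")
        b = os.path.join(tmp, "b")
        os.makedirs(a)
        os.makedirs(b)
        data = b"x" * 1024
        t_plain = unpack(a, 200, data)
        t_sync = unpack(b, 200, data, sync_each=True)
        print(f"no fsync: {t_plain:.3f}s  fsync-per-file: {t_sync:.3f}s")
```

On filesystems where fsync() is expensive (btrfs at the time, or ext3 with data=ordered), the sync_each=True run can be orders of magnitude slower, which matches the dpkg behavior described above.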