File-System Benchmarks With The Linux 2.6.34 Kernel
Thanks for the nice test, but I don't really understand why you are testing file-systems on an SSD while >95% of servers and desktops still have normal rotational HDDs. Unless there is TRIM support in Linux, it doesn't make much sense to use an SSD in any server for heavy file-system usage.
Originally posted by walkeer:
Thanks for the nice test, but I don't really understand why you are testing file-systems on an SSD while >95% of servers and desktops still have normal rotational HDDs. Unless there is TRIM support in Linux, it doesn't make much sense to use an SSD in any server for heavy file-system usage.
Yes, it's Wikipedia (the greatest source in the universe... </sarcasm>), but it says kernels 2.6.33 and higher support TRIM. I found a few posts going back to early last year that mentioned people were already committing patches for preliminary TRIM support.
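For anyone who wants to check on their own hardware, a rough sketch of verifying TRIM support and enabling online discard for ext4; the device names are just placeholders, and the discard mount option assumes a sufficiently new kernel:
# Does the drive advertise TRIM? (look for "Data Set Management TRIM supported")
hdparm -I /dev/sda | grep -i trim
# Mount an ext4 filesystem with online discard enabled
mount -t ext4 -o discard /dev/sda1 /mnt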
Originally posted by liangsuilong:
Will Michael test these file-systems on an HDD again? I am interested in it.
Michael Larabel
https://www.michaellarabel.com/
Originally posted by liangsuilong:
It is quite strange that ext4 becomes slower and slower with each new kernel release. I remember that ext4 was much faster than ext3 in the 2.6.2x kernels. I can feel that ext4 is much faster than ext3 in daily use, especially after using the computer for a long time.
Remember, Phoronix benchmark articles typically focus on the "out of the box" experience, so potential optimizations are omitted. It is generally assumed by distros and kernel developers that your system could crash or lose power at any moment. They also assume that you will come after them with torches and pitchforks if your data is lost. So the safer options very often become the defaults.
Originally posted by Delgarde:
Interesting - for the most part, Btrfs performs well, if not exceptionally so. But there are just a couple of tests - the database ones - where it lags hugely behind everything else. What makes it so much worse than the others at that particular usage pattern?
More modern filesystems are beginning to organise their data much like databases themselves, so not only is some of the effort by PostgreSQL/MySQL wasted, it might actually hinder performance.
Just my guess. When Btrfs and other filesystems get more popular, I'm sure the database software will be optimized for them.
Except for such special-purpose issues, Btrfs looks great for the general case.
Originally posted by Jimmy:
As I understand it, ext4 became safer and as a result became slower. Many of those safeties can be turned off with mount options to regain performance. If your disks are battery-backed, you can get the performance back without much worry about your data by changing a few mount options.
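For illustration, a sketch of the sort of mount options people use to trade crash safety back for speed; the device, mount point, and exact option mix are just examples, not a recommendation:
# /etc/fstab entry relaxing ext4's safety defaults (only sane with battery-backed storage)
/dev/sda2  /data  ext4  noatime,data=writeback,barrier=0  0  2
# Or drop write barriers on an already-mounted filesystem
mount -o remount,barrier=0 /data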
Was the XFS filesystem used for the benchmark using lazy counters or not?
It really makes a difference for metadata-intensive tasks like kernel compiles or high-concurrency benchmarks.
The lazy counters option is relatively new and has only been the default since the 3.1.0 release of xfsprogs; it has to be set when the XFS filesystem is created, so the question is: was the filesystem created with a recent version of mkfs.xfs?
From the XFS FAQ:
Work on the userspace packages has been just as busy. In mkfs.xfs the lazy superblock counter feature has now been enabled by default for the upcoming xfsprogs 3.1.0 release, which will require kernel 2.6.22 for the default mkfs invocation.
laptop / # xfs_db -r /dev/sda6
xfs_db> version
versionnum [0xb4a4+0xa] = V4,NLINK,ALIGN,DIRV2,LOGV2,EXTFLG,MOREBITS,ATTR2,LAZYSBCOUNT
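For anyone wanting to reproduce that check, a rough sketch of creating an XFS filesystem with lazy counters explicitly enabled and then verifying the feature flag; the device name is just an example:
# Enable lazy superblock counters at mkfs time, regardless of the xfsprogs default
mkfs.xfs -l lazy-count=1 /dev/sda6
# Confirm LAZYSBCOUNT shows up in the version flags
xfs_db -r -c version /dev/sda6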
Originally posted by movieman:
Ext4 still writes metadata before data, doesn't it? Battery backup won't help if it crashes between writing the metadata and writing the actual data.
Having said that, there are lots of people (like your typical home user) who care very much about their data, don't have their systems protected by batteries, and don't have elaborate file backup systems in place. These users also find ways of making crashes more likely, for example running out and buying a GPU from either ATI or NVIDIA and using the speedy binary-only drivers (don't flame; either vendor will significantly increase your likelihood of crashing. It's not like I'm expressing which vendor I believe is better at it).
So the question becomes: Will ext4 eat my valuable data if I push the crash monsters out of the shadows? Will it eat it better than other file systems? Maybe.
According to the man page, data=ordered is the default:
"This is the default mode. All data is forced directly out to the main file system prior to its metadata being committed to the journal."
In ordered mode, only the metadata is protected by journaling, meaning that the file system's internal structures should always stay consistent, but the actual file data could, for example, be corrupted by an incomplete write.
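If you want to see which mode your own ext4 filesystems are using, here is a quick check; the exact dmesg wording differs a bit between kernel versions, and /proc/mounts may omit options left at their defaults:
# Kernel log line printed at mount time, e.g. "mounted filesystem with ordered data mode"
dmesg | grep -i "ext4-fs"
# Options the filesystem is currently mounted with
grep ext4 /proc/mounts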
But then how do we end up with zero-length files when we do
fd = open("foo.new") / write(fd, ...) / close(fd) / rename("foo.new", "foo") <<<CRASH>>>
if the file data is written before the metadata? By using a kernel older than 2.6.30, or by using the non-default noauto_da_alloc mount option on 2.6.30 or later. (See mount(8) under "Mount options for ext4", auto_da_alloc / noauto_da_alloc. Also the description for this commit.)
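For completeness, the traditionally safe way to do that replace-via-rename from a script is to push the new file's contents to disk before the rename; a rough shell sketch, where foo and foo.new are just example names:
echo "new contents" > foo.new
# Flush dirty data to disk first (coarse, but portable)
sync
# The rename can now no longer leave a zero-length foo behind
mv foo.new foo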
From what I can tell, data is written before metadata by default for ext4 in recent kernels. So, while your data may still get eaten, it shouldn't be from metadata being written before data unless you have something like data=writeback,barrier=0 in your mount options.
Disclaimer: I'm not a file system engineer. I could be wrong. This is a "near as I can tell" type post.