just a thought...
Why is there such concern over which kernel has a small performance regression in it (in a not-for-production-use FS, no less)? Can you not upgrade kernels in ubuntu/fedora/suse/etc.? Does a vanilla kernel not work? If 2.6.35 is bad for the default FS of Ubuntu/$DISTRO, surely they would ship 2.6.34 or some other version? If you don't like changes, run one of the long-term kernels; take your pick of the older kernels listed as stable on http://www.kernel.org/: 2.6.34.x, 2.6.33.x, 2.6.27.x. All of these receive backported fixes for bugs and security issues.
I'm sure I'm missing something, as I switched to Gentoo some 7 years ago after getting grumpy at not being able to use a vanilla kernel with some DRM patches on Red Hat (it was Red Hat then) and SUSE. It sure would be nice if someone made a "make config" option for the kernel, but Gentoo has genkernel and it tends to work. Do Ubuntu/Fedora kernels have config.gz support turned on? If so, it should be very easy to rebuild a kernel. Although I'm guessing that Ubuntu etc. use initramfses these days, making it a bit harder to build your own kernel. Is there a reason to always use the provided Ubuntu kernel, or is it impossible to use a non-Ubuntu-packaged kernel?
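For what it's worth, checking for config.gz support and reusing the running kernel's config only takes a couple of commands. A minimal sketch, assuming you're sitting in an unpacked kernel source tree and the distro kernel was built with CONFIG_IKCONFIG_PROC:

```shell
# Check whether the running kernel exposes its configuration.
if [ -r /proc/config.gz ]; then
    # Seed this source tree's .config from the running kernel's config,
    # then answer prompts only for options new to this kernel version.
    zcat /proc/config.gz > .config
    make oldconfig
else
    # Many distros ship the config in /boot instead.
    echo "No /proc/config.gz; try /boot/config-$(uname -r) instead."
fi
```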
Really though, I'm curious why it's always "THE SKY IS FALLING" sort of news related to some version/check-in of the kernel as it relates to ext4 or btrfs. Don't get me wrong, I like to see people testing new code, and if I had more time/hardware I would be as well.
C'mon Phoronix, you can do better. First of all, do these write benchmarks write random data or zeroes? Is the hardware doing compression on its own anyway? And then you use a kernel in which you know btrfs has regressed, and try to compare it to a filesystem that is maturing.
Try testing a wide range of kernels using btrfs and btrfs -o compress. Use normal hardware, that is, a HDD, and an SSD that does not use compression internally.
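Enabling compression for such a test is just a mount option. A sketch, where the device and mountpoint are placeholders:

```shell
# Mount a btrfs volume with transparent compression for a benchmark run.
mount -t btrfs -o compress /dev/sdb1 /mnt/btrfs-test

# Or make it persistent via /etc/fstab:
# /dev/sdb1  /mnt/btrfs-test  btrfs  compress  0 0
```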
I agree. Don't test only Ubuntu in its default configuration.
Try Archlinux with only openbox also.
Try different kernels with different I/O and CPU schedulers, like this:
(There's a good PKGBUILD for Archlinux)
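Switching the I/O scheduler between benchmark runs doesn't require a rebuild, as long as the schedulers are compiled into the kernel. A sketch (needs root; sda is a placeholder for the disk under test):

```shell
# List the I/O schedulers available for sda; the active one is in brackets.
cat /sys/block/sda/queue/scheduler

# Switch sda to the deadline scheduler for the next benchmark run.
echo deadline > /sys/block/sda/queue/scheduler
```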
I think the benchmark draws a slightly wrong conclusion: decompressing the bzip'd archive needs some CPU to keep the I/O maxed out. So applications that are I/O bound with little CPU usage, like unpacking the Linux kernel or starting a big application (which uses much less CPU than, for example, compressing it), will naturally see better speed. For benchmarks that are both CPU and I/O bound, CPU starvation will limit I/O speed. Rotating media, which is slower on average than an SSD, will also show a much bigger imbalance between I/O reads. In the end, data that compresses well will likely benefit: startup of a Linux desktop is bound by reading a lot of config files and is mostly not CPU bound. So even though the benchmark doesn't show it, a desktop configuration measuring GNOME session startup time would probably see some improvement, while a compilation benchmark (which most users don't run) will not be as favorable.
I'm having trouble accepting the Dbench 4.0 12-client results for ext4. The max read and write speeds of that drive should be roughly in the ballpark of 280 MB/s, yet the benchmark chart claims around 960 MB/s for ext4, far beyond what the hardware is actually capable of. Is this test showing some kind of difference in in-memory caching (dcache) between btrfs and ext4? Is it a bug in Dbench causing crazy results for ext4? The previous reviews I looked up for ext4 vs. btrfs all use Dbench with 1 client, not 12, so I can't easily compare the new results against the old or figure out what caused these seemingly impossible numbers.
I agree here. For how long was this test run? 6 seconds or 10 minutes?
Originally Posted by elanthis
Besides there being too little information, I would also be interested in how btrfs compression performs on SLOW hard drives, and whether an Intel Atom is sufficient. The Eee 1000H, for example, has such a slow hard drive: dd if=/dev/zero of=zerofile gives about 4.5 MB/s, but that's on my JFS filesystem. I didn't test it with btrfs + compression, because mine is broken at the moment (disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)' failed.), but I think I remember copying something big at a constant nearly 12 MB/s...
I don't really see the benefit of this article, given that btrfs is known to have a regression in 2.6.35 that reduces performance by up to a factor of 10 in some workloads.
Originally Posted by thefirstm
Compiling vanilla kernels on RH/Fedora is no more an issue than on any other distro. Unpack the kernel source directory, run make config/menuconfig/etc., then rather than just "make", do "make rpm". This builds you an RPM package in the default package-generation directory; rpm -ivh newkernel.rpm installs it.
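The steps above can be sketched as follows (version numbers and paths are illustrative; the exact RPM filename depends on your configuration and rpmbuild setup):

```shell
# Build an installable kernel package from a vanilla source tree.
tar xjf linux-2.6.35.tar.bz2
cd linux-2.6.35
make menuconfig      # configure as usual
make rpm             # package the build as an RPM

# Install the result (filename and path will vary).
rpm -ivh ~/rpmbuild/RPMS/x86_64/kernel-2.6.35-1.x86_64.rpm
```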
Originally Posted by cynyr
The initial ramdisk image is EASY to deal with... there are several commands to manipulate them, the easiest of which is "mkinitrd /boot/initramfs-2.6.whatever.img kernelversion". You then add an entry to grub and you're off.
Fedora prefers dracut for initrd generation, I think. The syntax is nearly identical, except it requires the -f option to force replacement if the initrd already exists.
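Side by side, the two look like this (a sketch; the version string is a placeholder for your installed kernel):

```shell
# Traditional mkinitrd invocation: image path, then kernel version.
mkinitrd /boot/initramfs-2.6.35.img 2.6.35

# Equivalent dracut invocation; -f overwrites an existing image.
dracut -f /boot/initramfs-2.6.35.img 2.6.35
```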