Testing Out Btrfs In Ubuntu 10.10
Phoronix: Testing Out Btrfs In Ubuntu 10.10
Yesterday we reported that Ubuntu 10.10 gained Btrfs installation support, and since then we have been trying out this Btrfs support in Ubuntu "Maverick Meerkat". We now have a fresh set of Btrfs benchmarks to serve up.
Mmmh, still no CPU usage stats.
Can someone enlighten me: where does btrfs come from, and what's the main difference from ext3/4 that makes it appear to perform somewhat better than ext4?
We really need to see CPU charts to judge whether 'this' is better than 'that'.
I know compression takes CPU time, it has to. Now the question is how much?
Think about Atom CPUs and a FS that uses compression, ouch.
Would be nice to have near-real-time CPU charts with every test.
One test could show something that is on the slow side but uses half the CPU time.
And I don't think it would be too hard to track RAM usage on top while you're doing CPU.
Hi, I'm curious about the transparent compression - I searched Google but didn't find a lot of useful information on it (just that it is zlib compression). Does it actually make the files stored on the hard drive smaller? And if so, wouldn't its performance be highly dependent on the type of file and how much it can be compressed? (Like media files performing badly, while txt files do well?)
And as the others have asked, I would guess the CPU usage is very important with compression enabled. Also, does it support parallel processing? I rarely use both cores on my computer since most tasks can't, and if compression could harness the core that would otherwise sit idle during a single-threaded task, that would be awesome as well.
Excited for a new FS!
Btrfs is intended to be more scalable and flexible than ext3/4. Essentially, ext4 is a stop-gap measure to get some more out of the ext design so that Btrfs has time to mature. Some of the things Btrfs supports and ext4 does not include quick and easy snapshots (better than LVM snapshots, since they know about the filesystem structure), online defragmentation, and online growing and shrinking. Btrfs, once complete, should do pretty much everything that ZFS does and some things that ZFS doesn't.
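For a sense of what those features look like in practice, here is a rough sketch using the standard btrfs-progs command-line tools. This assumes a Btrfs filesystem already mounted at /mnt with a subvolume at /mnt/data; the paths are illustrative, not from the article.

```shell
# Take an instant copy-on-write snapshot of a subvolume
btrfs subvolume snapshot /mnt/data /mnt/data-snapshot

# Defragment while the filesystem stays mounted and in use
btrfs filesystem defragment /mnt/data

# Grow or shrink the filesystem online
btrfs filesystem resize -2g /mnt    # shrink by 2 GiB
btrfs filesystem resize max /mnt    # grow to fill the underlying device

# Transparent compression is enabled at mount time
mount -o compress /dev/sdb1 /mnt
```

Because snapshots are copy-on-write at the filesystem level, they are near-instant and initially consume almost no extra space, which is the advantage over block-level LVM snapshots mentioned above.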
The compression does make the files stored on the disk smaller, and its effectiveness does depend on the type of file.
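A quick way to see how strongly the file type matters is to run zlib (the same algorithm Btrfs uses for its transparent compression) over two kinds of data: highly redundant text versus random bytes, which is roughly what already-compressed media files look like. This is just an illustration of zlib's behavior, not a Btrfs benchmark.

```python
import os
import zlib

# Redundant text-like data vs. random "media-like" data of equal size
text_like = b"the quick brown fox jumps over the lazy dog\n" * 1000
media_like = os.urandom(len(text_like))  # compressed media is close to random

# Ratio of compressed size to original size (lower = better compression)
text_ratio = len(zlib.compress(text_like)) / len(text_like)
media_ratio = len(zlib.compress(media_like)) / len(media_like)

print(f"text-like data compresses to {text_ratio:.1%} of original size")
print(f"media-like data compresses to {media_ratio:.1%} of original size")
```

The text shrinks to a tiny fraction of its original size, while the random data stays essentially the same size (zlib even adds a little overhead), so on a filesystem full of JPEGs or MP3s you would pay the CPU cost of compression for almost no space savings.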
A Short History of btrfs
“A short history of btrfs” (LWN.net, July 22, 2009) by Valerie Aurora (formerly Henson) is available at http://lwn.net/Articles/342892/
That's a great article on btrfs, everyone should read it.
Originally Posted by Blue Beard
btrfs (B-tree file system) really isn't being created to outperform existing file systems; the idea is to add a bunch of really cool new features and to optimize it enough that they don't slow things down.
Do you really expect ZFS development to freeze in time while Btrfs plays catch-up? Just recently ZFS deduplication was added. I wonder what functionality will be added in a couple of years.
Originally Posted by waucka
BTW, it takes decades to iron out all the bugs in a file system. It takes at least five years after announcing v1.0 before anyone uses it in production. ZFS was developed in secrecy before being officially announced, and even then it took several years before it was let into production. When Btrfs reaches v1.0, it will take several more years before anyone trusts it in production.
As someone said, "filesystems should not be sexy; they should be boring and trusted technology" - implying that he will not let ZFS into his server rooms before at least 10 years have passed and ZFS has become mature enough.
Well, file systems are not bottles of wine that sit in a corner and get better with time. They have to be used to mature...
Originally Posted by kebabbert
I disagree --> It takes a very large amount of satisfactory user testing to get acceptance.
Originally Posted by kebabbert
Few users --> decades