Linux Summit will preview new advanced file system

    Shamelessly ripped off from here:

    Although computers get bigger, run faster and accomplish more amazing feats all the time, they still store data in a 1970s-era file system. But that may be about to change.

    Speaking at the Linux Foundation End User Collaboration Summit this week, Ted Ts'o, a Linux Foundation fellow, and Chris Mason, Oracle's director of kernel engineering, will offer a sneak peek at the file systems of the future. The New York City brainstorming session is meant to foster interaction between leading Linux developers and the most advanced users and, in turn, to accelerate development of the Linux platform.

    The problem with contemporary file systems, Ts'o said, is that -- following Moore's Law -- file sizes have grown and disk drives have doubled in capacity every couple of years. While the file system error rate per megabyte has remained constant, the increase in volume has created performance and quality-control problems for large data centers, which find the data more difficult to manage, he said.
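
    To make that scaling argument concrete, here is a minimal back-of-the-envelope sketch in Python; the per-megabyte error rate and starting capacity are invented placeholders, not figures from the talk:

        # Hypothetical numbers: a constant per-MB error rate against
        # capacity that doubles every two years.
        ERRORS_PER_MB = 1e-12     # assumed, for illustration only
        capacity_mb = 500_000     # assumed starting point: ~0.5 TB

        for year in range(0, 11, 2):
            expected_errors = ERRORS_PER_MB * capacity_mb
            print(f"year {year:2d}: {capacity_mb / 1e6:5.1f} TB -> "
                  f"{expected_errors:.1e} expected errors")
            capacity_mb *= 2      # Moore's-Law-style doubling

    Even with the per-megabyte rate held constant, the absolute number of expected errors doubles along with capacity, which is exactly what strains repair times and management tooling at data-center scale.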

    In addition, data centers today want to be able to do things they didn't dream of in the 1970s, like merge data from multiple hard drives, Ts'o said. Another challenge is the switch from conventional hard drives to solid-state disks, which use less power and retrieve data at a uniform rate irrespective of location, but have lower overall performance than hard drives, he said. So file systems today need to be adaptable to the hardware people want to use and how they actually use it, he said.

    But changing the file system to fix the scalability and functional limitations of ext3, the default file system in many popular Linux distributions, requires significant educational outreach. Because the consequences of data loss are so severe, data center managers are reluctant to trust their data to new file systems, Ts'o said. Information about a new system needs to be shared well ahead of time, including a roadmap of coming features, so IT professionals know what to expect, he said. That's where the Linux Foundation's event hopes to make inroads.

    The default ext4 and beyond
    The Linux Foundation has pursued a two-pronged approach: (1) an incremental improvement over the current ext3 and (2) adoption of a completely new file system as a long-term remedy. The interim file system, ext4, has been in development for two years and over the next nine months will start to appear, first in the community editions of the top Linux distros, he said. It will be an easy upgrade, he promised.

    Ext4 will have better performance and scalability than ext3 and is backward-compatible with ext3 because it is built on the same code base; retaining that code base, however, also limits how much it can be improved, Ts'o said.
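
    As a rough illustration of that upgrade path, the commonly documented in-place conversion from ext3 to ext4 amounts to enabling the new on-disk features and re-running a full fsck. A minimal sketch in Python, assuming an unmounted partition; the device name is a placeholder, and running this against the wrong disk is destructive:

        import subprocess

        DEV = "/dev/sdXn"  # placeholder: an unmounted ext3 partition

        # Turn on the ext4 on-disk features for the existing volume.
        subprocess.run(["tune2fs", "-O", "extents,uninit_bg,dir_index", DEV],
                       check=True)
        # A forced fsck is required after changing the feature flags.
        subprocess.run(["e2fsck", "-f", DEV], check=True)

    Only files written after the conversion use the new extent maps, which is also why the trip back to ext3 is not supported once such files exist.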

    Consequently, in November 2007 a group of leading vendors, including Red Hat, IBM and HP, met to evaluate several new file systems that could take Linux to the next level in enterprise data management. They agreed on btrfs, which was written from the ground up by Oracle's Mason, drawing on his earlier work at Novell on the reiserfs file system for Linux. Mason modified his original version to include the features requested at that meeting and already has a working prototype on his laptop, which outperforms some current systems but isn't stable yet, Ts'o said.

    The new btrfs file system will be more convenient and robust than ext4, with some key features that couldn't be incorporated without starting from scratch, and it is expected to leapfrog Sun Microsystems Inc.'s ZFS file system on several fronts, he said.

    These new features are expected to include storage pools, writeable recursive snapshots, fast file checking and recovery, easy large-storage management, proactive error management, better security, large scalability and fast incremental backup. In reality, most users don't have databases large enough to require some of the most advanced functions, but, like so many technology battles, it comes down to bragging rights and engineering pride, he said.
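
    To give a flavor of what "writeable recursive snapshots" look like in practice, here is a minimal sketch using the btrfs-progs command-line tools as they eventually shipped; these commands postdate the talk, and the paths are placeholders:

        import subprocess

        VOL = "/mnt/data"          # placeholder: a mounted btrfs subvolume
        SNAP = "/mnt/data/.snap1"  # placeholder: where the snapshot will live

        # Copy-on-write snapshot: near-instant, writable, and sharing all
        # unchanged blocks with the source until they diverge.
        subprocess.run(["btrfs", "subvolume", "snapshot", VOL, SNAP],
                       check=True)

    Because the snapshot shares storage with the original until blocks change, it also provides a natural anchor for fast incremental backup: only blocks modified since the snapshot need to be transferred.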

    The cost of the undertaking, too large for any single company to underwrite, will be shared among a number of corporate backers, each contributing in ways that match its abilities and fiscal constraints, he said.

    The roadmap for adoption includes alpha tests on laptops this year, followed by alpha tests on servers next year, preview releases in community distros in 2010, incorporation into production OSes in 2011 and the start of enterprise adoption in 2012. Even if btrfs is "feature complete" in 2009, it will require extensive debugging and performance tuning before it is enterprise-ready, he said. Another critical step is the addition of a user-space file system consistency checker for file recovery, Ts'o said, adding that the last 20% of the project requires 80% of the effort.

    As for Microsoft, it is unlikely to incorporate ext4 or btrfs because of licensing issues, Ts'o said. Both file systems are licensed under the GNU General Public License, which is incompatible with proprietary code, he said. But in the future, other operating systems could write drivers to read ext4 and btrfs volumes if they decided to do so, he added.

    "This file system is just one of the new technologies flowing into Linux, including virtualization, compartmentalization and power management," Ts'o said. "It's too difficult to predict the future impact of ext4 or btrfs compared swith all the other kernel advances. I'm not going to try."