OrangeFS Lands In Linux 4.6 Kernel


  • #11
    Originally posted by Hi-Angel View Post
    So, it's RAID, right? Doesn't BTRFS already have one built in?
    Don't think in terms of classical (local-disk) filesystems.

    Think in terms of network filesystems: like Samba/CIFS, or NFS, or Apple's netatalk...
    Your computer mounts a filesystem that is served over the network through a Samba server.
    That works on a small scale (a small office with a dozen workstations and a server).

    But that scales badly to whole giant clusters (several hundred nodes):
    if several nodes start fighting over the same file, or if a huge number of nodes start hammering your poor Samba or NFS server, it won't hold the load nicely, and all the nodes will experience delays waiting to obtain exclusive access to some file.

    That's where *cluster* filesystems enter:
    they are designed for highly parallel access, from a massive number of nodes, with high throughput and quick response.
    Usually, for extra performance, they use some specialised form of networking, like InfiniBand, or 10Gbps Ethernet *with RDMA*, etc.
    They are able to have several servers coordinating and spreading the load.
    Coordination is (supposed to be) fast, so nodes don't hit locks or slowdowns and can compute what they need without waiting too long for data.

    Competitors to OrangeFS would be things like Lustre, Ceph, GlusterFS, GFS, IBM's GPFS, Google FileSystem, etc.

    Technologies vary (all nodes access the same disks served over a SAN with something like Fibre Channel, files are accessed over the network not unlike NFS, all nodes are peer-to-peer, etc.), but all target the same kind of use case:
    hundreds of nodes, accessing petabytes of data, where performance means a lot.
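
    To make the idea concrete, here is a toy Python sketch (not OrangeFS's actual layout code; the stripe size and server names are made up) of the round-robin striping most of these filesystems do, which is why many clients spread their I/O instead of all hammering one box:

    ```python
    # Toy sketch (not OrangeFS's actual layout code): stripe a file's byte range
    # round-robin across several data servers. Names and sizes are hypothetical.
    STRIPE_SIZE = 64 * 1024                      # hypothetical 64 KiB stripe unit
    DATA_SERVERS = ["ds0", "ds1", "ds2", "ds3"]  # hypothetical data server names

    def locate(offset):
        """Map a file offset to (data server, offset inside that server's chunk file)."""
        stripe_index = offset // STRIPE_SIZE
        server = DATA_SERVERS[stripe_index % len(DATA_SERVERS)]
        local_offset = (stripe_index // len(DATA_SERVERS)) * STRIPE_SIZE + offset % STRIPE_SIZE
        return server, local_offset

    if __name__ == "__main__":
        # A sequential read walks every server in turn; clients working on
        # different regions of the file mostly talk to different servers.
        for offset in range(0, 6 * STRIPE_SIZE, STRIPE_SIZE):
            print(offset, "->", locate(offset))
    ```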



    • #12
      Speaking of which: Michael, is your server basement able to run a test collaboratively?
      i.e.: have all the machines test a cluster filesystem at the same time?

      I'm not speaking about the workloads themselves for now, but simply: can all your 60 machines, instead of each running a test on its own (to measure variation between hardware), test several filesystems simultaneously, all the same one at the same time, to see how a system can sustain 60 nodes at once?
      (to be able to bench, say, OrangeFS vs. GlusterFS vs. Ceph vs. Lustre)
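
      Something along these lines is all I mean; a minimal Python sketch (the hostnames, the /mnt/orangefs mount point and the fio workload are just placeholders) that starts the same benchmark on every node at once over SSH and waits for all of them:

      ```python
      # Hypothetical sketch: kick off the same benchmark on every node at
      # (roughly) the same time over SSH, then wait for all of them, so the
      # shared filesystem sees 60 clients at once instead of one at a time.
      import subprocess

      NODES = [f"node{i:02d}" for i in range(60)]  # placeholder hostnames
      # Placeholder workload; any benchmark command on the shared mount would do.
      BENCH_CMD = "fio --name=bench --directory=/mnt/orangefs --rw=randrw --size=1G"

      def run_everywhere(cmd):
          # Start every job before waiting on any of them, so they overlap.
          procs = {host: subprocess.Popen(["ssh", host, cmd],
                                          stdout=subprocess.DEVNULL,
                                          stderr=subprocess.DEVNULL)
                   for host in NODES}
          return {host: proc.wait() for host, proc in procs.items()}

      if __name__ == "__main__":
          results = run_everywhere(BENCH_CMD)
          failed = [host for host, rc in results.items() if rc != 0]
          print(f"{len(NODES) - len(failed)} nodes finished OK, {len(failed)} failed")
      ```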



      • #13
        In the last couple of weeks we've been running a VM cluster over a fairly large (20TB) Gluster cluster and I must admit that I'm pretty impressed by its performance.
        At least as far as I could test, running bonnie on a VM image stored on a Gluster mount was 10-25% slower than running the same image stored on a bare-metal ext4 partition.

        I wonder how Orange will compare...
        oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
        oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
        oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
        Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



        • #14
          What are the reasons to include in the kernel such an optional filesystem that one wouldn't boot from directly? Isn't that bloating the system, for all those home PCs, Androids and whatnot, with filesystem code they will never use?



          • #15
            Originally posted by uldics View Post
            What are the reasons to include in the kernel such an optional filesystem that one wouldn't boot from directly? Isn't that bloating the system, for all those home PCs, Androids and whatnot, with filesystem code they will never use?
            I think you're missing the point here. Read: OrangeFS is a distributed filesystem that will never be used in Android phones...
            oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
            oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
            oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
            Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



            • #16
              OK, so Android is off. But my point is that most desktop users won't need it, nor will a good portion of servers. So why couldn't it be made optional, modular? For me it would just be dead weight, and more dead weight means a greater attack surface. Wasn't the kernel going for modularity? These aspects puzzle me.



              • #17
                Originally posted by uldics View Post
                OK, so Android is off. But my point is that most desktop users won't need it, nor will a good portion of servers. So why couldn't it be made optional, modular? For me it would just be dead weight, and more dead weight means a greater attack surface. Wasn't the kernel going for modularity? These aspects puzzle me.
                1. All file systems, ext4 included, are optional, and a filesystem built as a module isn't even loaded until something actually uses it (see the sketch below).
                2. You cannot 'attack' a file system that's not being used (and there are many dozens of them in any Linux kernel installation).
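
                A quick way to see this (assuming a Linux box; the filesystem names checked below are arbitrary): /proc/filesystems only lists what is registered with the running kernel, so a filesystem module nobody uses is simply never loaded.

                ```python
                # Illustration only: /proc/filesystems lists filesystems registered
                # with the running kernel (built-ins plus loaded modules). An
                # orangefs module that nobody mounts never shows up here and
                # occupies no kernel memory.
                from pathlib import Path

                def registered_filesystems():
                    names = set()
                    for line in Path("/proc/filesystems").read_text().splitlines():
                        if line.strip():
                            # Lines look like "nodev<TAB>proc" or "<TAB>ext4";
                            # the last field is the filesystem name.
                            names.add(line.split()[-1])
                    return names

                if __name__ == "__main__":
                    registered = registered_filesystems()
                    for name in ("ext4", "btrfs", "orangefs"):
                        print(name, "registered" if name in registered else "not loaded")
                ```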

                Seriously, you're way off.

                - Gilboa
                Last edited by gilboa; 18 May 2016, 06:24 AM.
                oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.

