Ceph Sees "Lots Of Exciting Things" For Linux 5.3 Kernel


  • Ceph Sees "Lots Of Exciting Things" For Linux 5.3 Kernel

    Phoronix: Ceph Sees "Lots Of Exciting Things" For Linux 5.3 Kernel

    For those making use of the Ceph fault-tolerant storage platform, a number of updated kernel bits are landing in Linux 5.3...


  • #2
    With BlueStore, CephFS has become a decent and versatile filesystem for Linux. With its scale-out capabilities, it's a promising solution for people who may be starting off small but want to future-proof themselves by being able to expand a filesystem across many servers eventually. It could be a solution for everyone from a desktop user to a huge data center. In some ways it's like ZFS, except the filesystem can span multiple servers, not just multiple devices. It will be interesting to see what exciting new features are coming for CephFS.
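
    To make the scale-out point concrete, here is a minimal sketch using the librados Python bindings; the config path and the pool name mypool are placeholder assumptions, not anything from this thread. The client talks to one logical cluster and never cares which servers hold the data, and the reported capacity is the aggregate of every OSD, however many nodes that is.

    Code:
import rados

# Assumptions: python3-rados installed, /etc/ceph/ceph.conf points at a
# reachable cluster, and a pool named "mypool" already exists.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Aggregate capacity across every OSD on every server in the cluster.
    stats = cluster.get_cluster_stats()
    print("total: %.1f TiB, used: %.1f TiB"
          % (stats['kb'] / 2**30, stats['kb_used'] / 2**30))

    # The object lands on whichever OSDs CRUSH picks; the client doesn't care.
    ioctx = cluster.open_ioctx('mypool')
    try:
        ioctx.write_full('hello-object', b'stored somewhere in the cluster')
        print(ioctx.read('hello-object'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()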



    • #3
      Good stuff. We're planning on building a Ceph cluster with 1U NVMe servers on 100GbE in the coming months; we expect to see a couple dozen GB/s of throughput.



      • #4
        I don't think Ceph has any real open source competitors out there at the moment. Lustre could come close, but I think it is no longer mainlined. IBM's GPFS (Spectrum Scale) is there, but that's proprietary. I would love to hear stories of large-scale Ceph use in production.



        • #5
          Originally posted by anarki2 View Post
          Good stuff. We're planning on building a Ceph cluster with 1U NVMe servers on 100GbE in the coming months; we expect to see a couple dozen GB/s of throughput.
          Hi anarki2,

          Exciting stuff! Please do be careful about the NVMe drives you choose. High write endurance and fast O_DSYNC writes (which usually go hand-in-hand with power-loss protection) are generally key. Also, if you are going to load up 1U servers with lots of NVMe drives, you are going to need as much CPU as you can get to drive IOPS.
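
          For anyone comparing drives, a quick way to see the O_DSYNC difference is to time small queue-depth-1 synchronous writes, which is roughly the pattern the BlueStore WAL produces; fio is the usual tool for this, but the minimal Python sketch below shows the idea. The target path and iteration count are placeholders, and the script writes to the target, so only point it at a scratch file or a disposable device. Drives with power-loss protection typically complete these writes far faster than consumer drives.

          Code:
import os, time

# Placeholder assumptions: adjust TARGET (a scratch file or disposable test
# device) and ITERATIONS to taste. This WRITES to the target.
TARGET = "/tmp/dsync-test.bin"
BLOCK = b"\0" * 4096
ITERATIONS = 1000

# O_DSYNC means each write() only returns once the data is on stable media,
# which is exactly where drives without power-loss protection fall over.
fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
try:
    start = time.perf_counter()
    for i in range(ITERATIONS):
        os.pwrite(fd, BLOCK, i * len(BLOCK))
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)

print("avg O_DSYNC 4K write latency: %.1f us (~%.0f IOPS at queue depth 1)"
      % (1e6 * elapsed / ITERATIONS, ITERATIONS / elapsed))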



          • #6
            Originally posted by anarki2 View Post
            Good stuff. We're planning on building a Ceph cluster with 1U NVMe servers on 100GbE in the coming months; we expect to see a couple dozen GB/s of throughput.
            I did that on a BeeGFS cluster with Optanes over InfiniBand. That was quite powerful; next time I will do a test run on Ceph.



            • #7
              Originally posted by Neraxa View Post
              With BlueStore, CephFS has become a decent and versatile filesystem for Linux. With its scale-out capabilities, it's a promising solution for people who may be starting off small but want to future-proof themselves by being able to expand a filesystem across many servers eventually. It could be a solution for everyone from a desktop user to a huge data center. In some ways it's like ZFS, except the filesystem can span multiple servers, not just multiple devices. It will be interesting to see what exciting new features are coming for CephFS.
              For SOHO and filesystem-only uses, MooseFS/LizardFS are much lighter and easier to set up.



              • #8
                As far as I know, GitLab is using Ceph to host their repositories and even relational databases (Postgres).



                • #9
                  Originally posted by sarfarazahmad View Post
                  I don't think Ceph has any real open source competitors out there at the moment. Lustre could come close, but I think it is no longer mainlined. IBM's GPFS (Spectrum Scale) is there, but that's proprietary. I would love to hear stories of large-scale Ceph use in production.



                  • #10
                    Originally posted by Nite_Hawk View Post

                    Hi anarki2,

                    Exciting stuff! Please do be careful about the NVMe drives you choose. High write endurance and fast O_DSYNC writes (which usually go hand-in-hand with power-loss protection) are generally key. Also, if you are going to load up 1U servers with lots of NVMe drives, you are going to need as much CPU as you can get to drive IOPS.
                    Thanks for the pointers. Supermicro has understandably been putting EPYCs in their NVMe storage servers (for the incredible 128 PCIe lanes), so I think we'll be good to go.

                    The concept is to have many small parts that are easy to replace and easy to scale in increments, hence the 1U form factor. These usually have 10 NVMe drives. Need more IOPS or space? Just add one more node and be done with it.

                    After a quick glance, something between an EPYC 7251 and 7351 should do, with Intel P4610 SSDs plus 128-ish GB of RAM, but picking the CPU feels like stabbing in the dark, and for RAM I still need to read through the Ceph planning guides. Any suggestions? Our current choice is the AS-1113S-WN10RT; Thinkmate has a configurator for the parts and disks.

                    For networking we'll probably settle on a Mellanox ConnectX dual 100G card and a 32x100G switch from FiberStore.
                    Last edited by anarki2; 19 July 2019, 05:47 AM.
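
                    As a rough sanity check on the "couple dozen GB/s" goal and a build like the one above, here is a back-of-the-envelope sketch; the node count, per-drive speeds and replication factor are illustrative assumptions, not measurements of the parts named in this thread.

                    Code:
# All inputs are illustrative assumptions for a rough estimate.
nodes = 6                      # 1U servers in the cluster
drives_per_node = 10           # NVMe bays per 1U chassis
drive_write_gbps = 2.0         # sustained GB/s write per drive
drive_read_gbps = 3.0          # sustained GB/s read per drive
replication = 3                # each client write is stored this many times
nic_gbps = 2 * 100 / 8         # dual 100GbE per node, converted to GB/s

raw_write = nodes * drives_per_node * drive_write_gbps
raw_read = nodes * drives_per_node * drive_read_gbps
net_limit = nodes * nic_gbps

# Client-visible writes are divided by the replication factor, and both
# directions are capped by the network fabric. This ignores CPU limits and
# OSD-to-OSD replication traffic, which bite hard on all-NVMe clusters.
client_write = min(raw_write / replication, net_limit)
client_read = min(raw_read, net_limit)

print(f"est. aggregate client writes: ~{client_write:.0f} GB/s")
print(f"est. aggregate client reads:  ~{client_read:.0f} GB/s (network cap {net_limit:.0f} GB/s)")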

