In-Kernel SMB3 File Server Looks To Land In Linux 5.15


  • In-Kernel SMB3 File Server Looks To Land In Linux 5.15

    Phoronix: In-Kernel SMB3 File Server Looks To Land In Linux 5.15

    One of the very first pull requests for Linux 5.15 now that its merge window is open following the Linux 5.14 release is to merge KSMBD, the in-kernel SMB3 protocol file server...

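    For anyone wondering what using it looks like in practice: the ksmbd kernel module is paired with the ksmbd-tools userspace package and is configured much like Samba. A minimal sketch, assuming the usual /etc/ksmbd/ksmbd.conf layout (share name and paths are examples; option names may differ between distributions and ksmbd-tools versions):

        # /etc/ksmbd/ksmbd.conf -- smb.conf-like syntax
        [global]
            netbios name = KSMBD-SERVER

        [files]
            path = /srv/files
            read only = no

    After that it is roughly: add a user with ksmbd.adduser -a <user>, load the ksmbd module and start ksmbd.mountd.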

  • #2
    Will this negatively affect the security of the Linux kernel?

    • #3
      GREAT. But does that mean that it needs RDMA-capable NICs?
      Do any of the cheap NICs do RDMA (Realtek etc.)?

      • #4
        Why not NFS?

        • #5
          Originally posted by q2dg View Post
          Why not NFS?
          There are plenty of reasons for SMB, primarily interoperability with Windows. Yes, Windows supports NFS, but only in Pro or better editions I believe (at least that is how it used to be; I haven't checked for years).

          • #6
            Originally posted by uid313 View Post
            Will this negatively affect the security of the Linux kernel?
            Everything added to the kernel increases the attack surface, especially a network protocol implementation.

            Originally posted by Brane215 View Post
            GREAT. But does that mean that it needs RDMA-capable NICs?
            Do any of the cheap NICs do RDMA (Realtek etc.)?
            No, it does not need one, but could use the feature in future versions.
            No, consumer NICs do not usually have this feature.
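
            A quick way to check whether a box already has an RDMA-capable NIC is to look under /sys/class/infiniband, where the kernel verbs stack registers its devices. A minimal sketch in Python (pure stdlib, nothing vendor-specific assumed):

                # List RDMA-capable devices registered with the kernel verbs stack.
                # On a machine with only plain consumer NICs this directory is
                # usually missing or empty.
                import os

                path = "/sys/class/infiniband"
                devices = sorted(os.listdir(path)) if os.path.isdir(path) else []
                print("RDMA devices:", ", ".join(devices) if devices else "none found")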

            Originally posted by q2dg View Post
            Why not NFS?
            Because NFS sucks :P
            On a serious note, this is not replacing NFS, so use it if you dare.

            • #7
              Originally posted by Brane215 View Post
              GREAT. But does that mean that it needs RDMA-capable NICs?
              Do any of the cheap NICs do RDMA (Realtek etc.)?
              I'm not aware of any Realtek-based NIC that implements RDMA.

              If budget is a concern, your best option is to go with used Mellanox InfiniBand cards or, for iWARP (RDMA over Ethernet/TCP) rather than InfiniBand, used Chelsio cards (QLogic or SolarFlare iWARP cards may also show up). But you must double-check OS driver support and download availability before taking the plunge.

              I went with Mellanox, whose VPI cards provide both InfiniBand (IB) and IP over InfiniBand (IPoIB). Avoid ConnectX-2 or earlier cards; they are really too ancient. ConnectX-3 cards are the deal because most data centers are dumping them to replace them with ConnectX-4/-5/-6 or NVIDIA BlueField and move on to 100, 200, 400 GbE intranets.

              These used cards are LESS EXPENSIVE than the "shiny new consumer" 10 GbE cards anyway! I got 2-port ConnectX-3 cards for as low as $45 and ~$80 for the ConnectX-3 Pro back in 2018-2019. Fun fact: their prices have increased over the last year. Must I conclude that the silicon shortage of 2020 and 2021 travelled back in time and affected the manufacturing of these cards 15 years ago?

              Same story with the switches. I started with a QDR switch (40 Gbps, which translates to ~30 GbE with IPoIB, $100), then an FDR10 (~38 GbE IPoIB, $150) and finally an FDR (56 Gbps, which translates to 45+ GbE IPoIB). The latter is a 36-port and I got it for $200. Compare that with the price of a "shiny new consumer" 36-port 10 GbE switch. I rest my case. If you only want to connect two PCs you don't need a switch: just two cards and a cable.

              For the QSFP+/QSFP14 cables, just wait for a good deal to appear. I got 3-meter DAC cables for < $10 a unit by buying a 12-cable lot (only one had a defect and ran at a slower speed). It's just a question of patience. For runs longer than 5 meters, go with optical cables; more expensive but still manageable.

              For more details, with an introduction and a hands-on, go here: https://magazine.odroid.com/article/...the-odroid-h2/ which is a general article about using such a card on a Hardkernel Odroid H2+. Follow-up here: https://forum.odroid.com/viewtopic.php?f=172&t=38711. These links refer to an "exotic" usage with an SBC, but most of the text applies to desktop PCs.

              To reach 45+ GbE you need a modern PC with a PCIe Gen 3 x8 (electrical) slot. An x16 will do, and an x16 bifurcated into x8 (electrical) + x8 (electrical) will also do. Any reasonable desktop CPU with 4 or more cores will do: a 9th-gen Intel Core does the job, same for AMD Zen+ or later. Earlier CPUs might cough a little and top out somewhere between 30 and 40 GbE. The main point is PCIe Gen 3 x8 (electrical). I have an older PC with PCIe Gen 2 and, I believe, a 3rd-gen Intel Core; it tops out at ~21 GbE. The Odroid H2+ previously mentioned, with a 4-core Celeron J4105, tops out at ~14 GbE (PCIe Gen 2 and x4 instead of x8).

              You can also find used 100 GbE cards on eBay, but the lowest price for a ConnectX-4 is still ~$250+ per card. Let's wait 15 more years until all data centers move up to 800 GbE or 1 TbE and finally dump their 100+ GbE hardware...

              Most ATX and mATX consumer mobos only have 24 PCIe lanes usable with a desktop CPU. You can use a graphics card in the first x16 slot and such a NIC in the second x16 slot (but x8 electrical). Depending on the motherboard you might want to swap the cards: with one mobo I had LnkSta Width x16 on the graphics card but only Width x4 on the NIC; after swapping the cards I had Width x8 on both, go figure. With a mini-ITX board, use an Intel CPU (with iGPU) or an AMD G-series APU and plug the NIC into the single x16 slot (x8 electrical up to the 4000 series; fully x16 electrical starting with the 5000 series which, if the mobo BIOS supports it, you can bifurcate into x8 + x8, meaning you can use the NIC plus another x8 card).
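
              If you want to confirm which link the card actually negotiated without wading through lspci -vv, the same LnkSta information is exposed in sysfs. A minimal sketch (the PCI address is an example; find yours with lspci):

                  # Print the negotiated and maximum PCIe link speed/width of a device,
                  # i.e. the values lspci -vv reports under LnkSta / LnkCap.
                  import os

                  dev = "/sys/bus/pci/devices/0000:02:00.0"  # example address, adjust to your NIC
                  for attr in ("current_link_speed", "current_link_width",
                               "max_link_speed", "max_link_width"):
                      with open(os.path.join(dev, attr)) as f:
                          print(attr, "=", f.read().strip())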

              The Mellanox (now NVIDIA) site provides the driver set for Windows and Linux, with archived versions supporting the ConnectX-3. Linux also ships RDMA support and drivers which you can install with yum, apt, etc. At the beginning I was using the Mellanox version; now I'm using the set that comes with the kernel (which is anyway also from Mellanox, just minus the extra tools and utilities). There is a new Python package in the works for RDMA: see https://github.com/linux-rdma/rdma-c...master/pyverbs, otherwise the "grandfather" of them all is: https://github.com/jgunthorpe/python-rdma.
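
              For the curious, the pyverbs bindings mentioned above already let you poke at the hardware from Python. A minimal sketch, assuming rdma-core with pyverbs installed (the device name is an example; a ConnectX-3 usually shows up as mlx4_0):

                  # Enumerate RDMA devices via the pyverbs bindings from rdma-core,
                  # then open one and dump its capabilities.
                  import pyverbs.device as d

                  for dev in d.get_device_list():
                      print(dev.name.decode())

                  ctx = d.Context(name="mlx4_0")  # example device name
                  print(ctx.query_device())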

              So while waiting for SMB Direct on Linux (very glad it is showing up!), I've been using 30+ and then 45+ GbE for two years now. Samba runs on top of IPoIB. NFS, in addition, supports RDMA.

              Any IP-based app will work with IPoIB. To use RDMA directly, you need an app that supports RDMA, like NFS for instance.
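
              For reference, once the server exports over RDMA, the client side of NFS over RDMA is just a mount option (20049 is the usual NFS/RDMA port; server, export and mountpoint below are examples), e.g. an /etc/fstab entry like:

                  # example /etc/fstab entry for NFS over RDMA (adjust server, export and mountpoint)
                  server:/export   /mnt/export   nfs   proto=rdma,port=20049   0  0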

              HTH
              Last edited by domih; 30 August 2021, 08:28 AM.

              • #8
                Originally posted by q2dg View Post
                Why not NFS?
                Probably because for every NFS user there are millions of SMB users. Besides, SMB is not great, but NFS is positively revolting.
                Last edited by jacob; 30 August 2021, 11:36 PM.

                • #9
                  And kdbus was rejected because better performance does not justify kernel integration...

                  • #10
                    I don't care one way or the other, but why the hate for NFS?
